Dec  2 10:47:33 np0005542546 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  2 10:47:33 np0005542546 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  2 10:47:33 np0005542546 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 10:47:33 np0005542546 kernel: BIOS-provided physical RAM map:
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  2 10:47:33 np0005542546 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  2 10:47:33 np0005542546 kernel: NX (Execute Disable) protection: active
Dec  2 10:47:33 np0005542546 kernel: APIC: Static calls initialized
Dec  2 10:47:33 np0005542546 kernel: SMBIOS 2.8 present.
Dec  2 10:47:33 np0005542546 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  2 10:47:33 np0005542546 kernel: Hypervisor detected: KVM
Dec  2 10:47:33 np0005542546 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  2 10:47:33 np0005542546 kernel: kvm-clock: using sched offset of 3002427782 cycles
Dec  2 10:47:33 np0005542546 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  2 10:47:33 np0005542546 kernel: tsc: Detected 2800.000 MHz processor
Dec  2 10:47:33 np0005542546 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  2 10:47:33 np0005542546 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  2 10:47:33 np0005542546 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  2 10:47:33 np0005542546 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  2 10:47:33 np0005542546 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  2 10:47:33 np0005542546 kernel: Using GB pages for direct mapping
Dec  2 10:47:33 np0005542546 kernel: RAMDISK: [mem 0x2e95d000-0x334a6fff]
Dec  2 10:47:33 np0005542546 kernel: ACPI: Early table checksum verification disabled
Dec  2 10:47:33 np0005542546 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  2 10:47:33 np0005542546 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 10:47:33 np0005542546 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 10:47:33 np0005542546 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 10:47:33 np0005542546 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  2 10:47:33 np0005542546 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 10:47:33 np0005542546 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 10:47:33 np0005542546 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  2 10:47:33 np0005542546 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  2 10:47:33 np0005542546 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  2 10:47:33 np0005542546 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  2 10:47:33 np0005542546 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  2 10:47:33 np0005542546 kernel: No NUMA configuration found
Dec  2 10:47:33 np0005542546 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  2 10:47:33 np0005542546 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  2 10:47:33 np0005542546 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  2 10:47:33 np0005542546 kernel: Zone ranges:
Dec  2 10:47:33 np0005542546 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  2 10:47:33 np0005542546 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  2 10:47:33 np0005542546 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  2 10:47:33 np0005542546 kernel:  Device   empty
Dec  2 10:47:33 np0005542546 kernel: Movable zone start for each node
Dec  2 10:47:33 np0005542546 kernel: Early memory node ranges
Dec  2 10:47:33 np0005542546 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  2 10:47:33 np0005542546 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  2 10:47:33 np0005542546 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  2 10:47:33 np0005542546 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  2 10:47:33 np0005542546 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  2 10:47:33 np0005542546 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  2 10:47:33 np0005542546 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  2 10:47:33 np0005542546 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  2 10:47:33 np0005542546 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  2 10:47:33 np0005542546 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  2 10:47:33 np0005542546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  2 10:47:33 np0005542546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  2 10:47:33 np0005542546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  2 10:47:33 np0005542546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  2 10:47:33 np0005542546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  2 10:47:33 np0005542546 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  2 10:47:33 np0005542546 kernel: TSC deadline timer available
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Max. logical packages:   8
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Max. logical dies:       8
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Max. dies per package:   1
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Max. threads per core:   1
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Num. cores per package:     1
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Num. threads per package:   1
Dec  2 10:47:33 np0005542546 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  2 10:47:33 np0005542546 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  2 10:47:33 np0005542546 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  2 10:47:33 np0005542546 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  2 10:47:33 np0005542546 kernel: Booting paravirtualized kernel on KVM
Dec  2 10:47:33 np0005542546 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  2 10:47:33 np0005542546 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  2 10:47:33 np0005542546 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  2 10:47:33 np0005542546 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  2 10:47:33 np0005542546 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 10:47:33 np0005542546 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  2 10:47:33 np0005542546 kernel: random: crng init done
Dec  2 10:47:33 np0005542546 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: Fallback order for Node 0: 0 
Dec  2 10:47:33 np0005542546 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  2 10:47:33 np0005542546 kernel: Policy zone: Normal
Dec  2 10:47:33 np0005542546 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  2 10:47:33 np0005542546 kernel: software IO TLB: area num 8.
Dec  2 10:47:33 np0005542546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  2 10:47:33 np0005542546 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  2 10:47:33 np0005542546 kernel: ftrace: allocated 193 pages with 3 groups
Dec  2 10:47:33 np0005542546 kernel: Dynamic Preempt: voluntary
Dec  2 10:47:33 np0005542546 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  2 10:47:33 np0005542546 kernel: rcu: 	RCU event tracing is enabled.
Dec  2 10:47:33 np0005542546 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  2 10:47:33 np0005542546 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  2 10:47:33 np0005542546 kernel: 	Rude variant of Tasks RCU enabled.
Dec  2 10:47:33 np0005542546 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  2 10:47:33 np0005542546 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  2 10:47:33 np0005542546 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  2 10:47:33 np0005542546 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 10:47:33 np0005542546 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 10:47:33 np0005542546 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 10:47:33 np0005542546 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  2 10:47:33 np0005542546 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  2 10:47:33 np0005542546 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  2 10:47:33 np0005542546 kernel: Console: colour VGA+ 80x25
Dec  2 10:47:33 np0005542546 kernel: printk: console [ttyS0] enabled
Dec  2 10:47:33 np0005542546 kernel: ACPI: Core revision 20230331
Dec  2 10:47:33 np0005542546 kernel: APIC: Switch to symmetric I/O mode setup
Dec  2 10:47:33 np0005542546 kernel: x2apic enabled
Dec  2 10:47:33 np0005542546 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  2 10:47:33 np0005542546 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  2 10:47:33 np0005542546 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  2 10:47:33 np0005542546 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  2 10:47:33 np0005542546 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  2 10:47:33 np0005542546 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  2 10:47:33 np0005542546 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  2 10:47:33 np0005542546 kernel: Spectre V2 : Mitigation: Retpolines
Dec  2 10:47:33 np0005542546 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  2 10:47:33 np0005542546 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  2 10:47:33 np0005542546 kernel: RETBleed: Mitigation: untrained return thunk
Dec  2 10:47:33 np0005542546 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  2 10:47:33 np0005542546 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  2 10:47:33 np0005542546 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  2 10:47:33 np0005542546 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  2 10:47:33 np0005542546 kernel: x86/bugs: return thunk changed
Dec  2 10:47:33 np0005542546 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  2 10:47:33 np0005542546 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  2 10:47:33 np0005542546 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  2 10:47:33 np0005542546 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  2 10:47:33 np0005542546 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  2 10:47:33 np0005542546 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  2 10:47:33 np0005542546 kernel: Freeing SMP alternatives memory: 40K
Dec  2 10:47:33 np0005542546 kernel: pid_max: default: 32768 minimum: 301
Dec  2 10:47:33 np0005542546 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  2 10:47:33 np0005542546 kernel: landlock: Up and running.
Dec  2 10:47:33 np0005542546 kernel: Yama: becoming mindful.
Dec  2 10:47:33 np0005542546 kernel: SELinux:  Initializing.
Dec  2 10:47:33 np0005542546 kernel: LSM support for eBPF active
Dec  2 10:47:33 np0005542546 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  2 10:47:33 np0005542546 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  2 10:47:33 np0005542546 kernel: ... version:                0
Dec  2 10:47:33 np0005542546 kernel: ... bit width:              48
Dec  2 10:47:33 np0005542546 kernel: ... generic registers:      6
Dec  2 10:47:33 np0005542546 kernel: ... value mask:             0000ffffffffffff
Dec  2 10:47:33 np0005542546 kernel: ... max period:             00007fffffffffff
Dec  2 10:47:33 np0005542546 kernel: ... fixed-purpose events:   0
Dec  2 10:47:33 np0005542546 kernel: ... event mask:             000000000000003f
Dec  2 10:47:33 np0005542546 kernel: signal: max sigframe size: 1776
Dec  2 10:47:33 np0005542546 kernel: rcu: Hierarchical SRCU implementation.
Dec  2 10:47:33 np0005542546 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  2 10:47:33 np0005542546 kernel: smp: Bringing up secondary CPUs ...
Dec  2 10:47:33 np0005542546 kernel: smpboot: x86: Booting SMP configuration:
Dec  2 10:47:33 np0005542546 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  2 10:47:33 np0005542546 kernel: smp: Brought up 1 node, 8 CPUs
Dec  2 10:47:33 np0005542546 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec  2 10:47:33 np0005542546 kernel: node 0 deferred pages initialised in 9ms
Dec  2 10:47:33 np0005542546 kernel: Memory: 7774476K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 607496K reserved, 0K cma-reserved)
Dec  2 10:47:33 np0005542546 kernel: devtmpfs: initialized
Dec  2 10:47:33 np0005542546 kernel: x86/mm: Memory block size: 128MB
Dec  2 10:47:33 np0005542546 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  2 10:47:33 np0005542546 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  2 10:47:33 np0005542546 kernel: pinctrl core: initialized pinctrl subsystem
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  2 10:47:33 np0005542546 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  2 10:47:33 np0005542546 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  2 10:47:33 np0005542546 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  2 10:47:33 np0005542546 kernel: audit: initializing netlink subsys (disabled)
Dec  2 10:47:33 np0005542546 kernel: audit: type=2000 audit(1764690451.796:1): state=initialized audit_enabled=0 res=1
Dec  2 10:47:33 np0005542546 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  2 10:47:33 np0005542546 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  2 10:47:33 np0005542546 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  2 10:47:33 np0005542546 kernel: cpuidle: using governor menu
Dec  2 10:47:33 np0005542546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  2 10:47:33 np0005542546 kernel: PCI: Using configuration type 1 for base access
Dec  2 10:47:33 np0005542546 kernel: PCI: Using configuration type 1 for extended access
Dec  2 10:47:33 np0005542546 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  2 10:47:33 np0005542546 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  2 10:47:33 np0005542546 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  2 10:47:33 np0005542546 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  2 10:47:33 np0005542546 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  2 10:47:33 np0005542546 kernel: Demotion targets for Node 0: null
Dec  2 10:47:33 np0005542546 kernel: cryptd: max_cpu_qlen set to 1000
Dec  2 10:47:33 np0005542546 kernel: ACPI: Added _OSI(Module Device)
Dec  2 10:47:33 np0005542546 kernel: ACPI: Added _OSI(Processor Device)
Dec  2 10:47:33 np0005542546 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  2 10:47:33 np0005542546 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  2 10:47:33 np0005542546 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  2 10:47:33 np0005542546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  2 10:47:33 np0005542546 kernel: ACPI: Interpreter enabled
Dec  2 10:47:33 np0005542546 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  2 10:47:33 np0005542546 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  2 10:47:33 np0005542546 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  2 10:47:33 np0005542546 kernel: PCI: Using E820 reservations for host bridge windows
Dec  2 10:47:33 np0005542546 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  2 10:47:33 np0005542546 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  2 10:47:33 np0005542546 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [3] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [4] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [5] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [6] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [7] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [8] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [9] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [10] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [11] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [12] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [13] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [14] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [15] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [16] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [17] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [18] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [19] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [20] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [21] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [22] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [23] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [24] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [25] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [26] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [27] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [28] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [29] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [30] registered
Dec  2 10:47:33 np0005542546 kernel: acpiphp: Slot [31] registered
Dec  2 10:47:33 np0005542546 kernel: PCI host bridge to bus 0000:00
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  2 10:47:33 np0005542546 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  2 10:47:33 np0005542546 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  2 10:47:33 np0005542546 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  2 10:47:33 np0005542546 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  2 10:47:33 np0005542546 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  2 10:47:33 np0005542546 kernel: iommu: Default domain type: Translated
Dec  2 10:47:33 np0005542546 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  2 10:47:33 np0005542546 kernel: SCSI subsystem initialized
Dec  2 10:47:33 np0005542546 kernel: ACPI: bus type USB registered
Dec  2 10:47:33 np0005542546 kernel: usbcore: registered new interface driver usbfs
Dec  2 10:47:33 np0005542546 kernel: usbcore: registered new interface driver hub
Dec  2 10:47:33 np0005542546 kernel: usbcore: registered new device driver usb
Dec  2 10:47:33 np0005542546 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  2 10:47:33 np0005542546 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  2 10:47:33 np0005542546 kernel: PTP clock support registered
Dec  2 10:47:33 np0005542546 kernel: EDAC MC: Ver: 3.0.0
Dec  2 10:47:33 np0005542546 kernel: NetLabel: Initializing
Dec  2 10:47:33 np0005542546 kernel: NetLabel:  domain hash size = 128
Dec  2 10:47:33 np0005542546 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  2 10:47:33 np0005542546 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  2 10:47:33 np0005542546 kernel: PCI: Using ACPI for IRQ routing
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  2 10:47:33 np0005542546 kernel: vgaarb: loaded
Dec  2 10:47:33 np0005542546 kernel: clocksource: Switched to clocksource kvm-clock
Dec  2 10:47:33 np0005542546 kernel: VFS: Disk quotas dquot_6.6.0
Dec  2 10:47:33 np0005542546 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  2 10:47:33 np0005542546 kernel: pnp: PnP ACPI init
Dec  2 10:47:33 np0005542546 kernel: pnp: PnP ACPI: found 5 devices
Dec  2 10:47:33 np0005542546 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_INET protocol family
Dec  2 10:47:33 np0005542546 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  2 10:47:33 np0005542546 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_XDP protocol family
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  2 10:47:33 np0005542546 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  2 10:47:33 np0005542546 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  2 10:47:33 np0005542546 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72744 usecs
Dec  2 10:47:33 np0005542546 kernel: PCI: CLS 0 bytes, default 64
Dec  2 10:47:33 np0005542546 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  2 10:47:33 np0005542546 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  2 10:47:33 np0005542546 kernel: Trying to unpack rootfs image as initramfs...
Dec  2 10:47:33 np0005542546 kernel: ACPI: bus type thunderbolt registered
Dec  2 10:47:33 np0005542546 kernel: Initialise system trusted keyrings
Dec  2 10:47:33 np0005542546 kernel: Key type blacklist registered
Dec  2 10:47:33 np0005542546 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  2 10:47:33 np0005542546 kernel: zbud: loaded
Dec  2 10:47:33 np0005542546 kernel: integrity: Platform Keyring initialized
Dec  2 10:47:33 np0005542546 kernel: integrity: Machine keyring initialized
Dec  2 10:47:33 np0005542546 kernel: Freeing initrd memory: 77096K
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_ALG protocol family
Dec  2 10:47:33 np0005542546 kernel: xor: automatically using best checksumming function   avx       
Dec  2 10:47:33 np0005542546 kernel: Key type asymmetric registered
Dec  2 10:47:33 np0005542546 kernel: Asymmetric key parser 'x509' registered
Dec  2 10:47:33 np0005542546 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  2 10:47:33 np0005542546 kernel: io scheduler mq-deadline registered
Dec  2 10:47:33 np0005542546 kernel: io scheduler kyber registered
Dec  2 10:47:33 np0005542546 kernel: io scheduler bfq registered
Dec  2 10:47:33 np0005542546 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  2 10:47:33 np0005542546 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  2 10:47:33 np0005542546 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  2 10:47:33 np0005542546 kernel: ACPI: button: Power Button [PWRF]
Dec  2 10:47:33 np0005542546 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  2 10:47:33 np0005542546 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  2 10:47:33 np0005542546 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  2 10:47:33 np0005542546 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  2 10:47:33 np0005542546 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  2 10:47:33 np0005542546 kernel: Non-volatile memory driver v1.3
Dec  2 10:47:33 np0005542546 kernel: rdac: device handler registered
Dec  2 10:47:33 np0005542546 kernel: hp_sw: device handler registered
Dec  2 10:47:33 np0005542546 kernel: emc: device handler registered
Dec  2 10:47:33 np0005542546 kernel: alua: device handler registered
Dec  2 10:47:33 np0005542546 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  2 10:47:33 np0005542546 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  2 10:47:33 np0005542546 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  2 10:47:33 np0005542546 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  2 10:47:33 np0005542546 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  2 10:47:33 np0005542546 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  2 10:47:33 np0005542546 kernel: usb usb1: Product: UHCI Host Controller
Dec  2 10:47:33 np0005542546 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  2 10:47:33 np0005542546 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  2 10:47:33 np0005542546 kernel: hub 1-0:1.0: USB hub found
Dec  2 10:47:33 np0005542546 kernel: hub 1-0:1.0: 2 ports detected
Dec  2 10:47:33 np0005542546 kernel: usbcore: registered new interface driver usbserial_generic
Dec  2 10:47:33 np0005542546 kernel: usbserial: USB Serial support registered for generic
Dec  2 10:47:33 np0005542546 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  2 10:47:33 np0005542546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  2 10:47:33 np0005542546 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  2 10:47:33 np0005542546 kernel: mousedev: PS/2 mouse device common for all mice
Dec  2 10:47:33 np0005542546 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  2 10:47:33 np0005542546 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  2 10:47:33 np0005542546 kernel: rtc_cmos 00:04: registered as rtc0
Dec  2 10:47:33 np0005542546 kernel: rtc_cmos 00:04: setting system clock to 2025-12-02T15:47:32 UTC (1764690452)
Dec  2 10:47:33 np0005542546 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  2 10:47:33 np0005542546 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  2 10:47:33 np0005542546 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  2 10:47:33 np0005542546 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  2 10:47:33 np0005542546 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  2 10:47:33 np0005542546 kernel: usbcore: registered new interface driver usbhid
Dec  2 10:47:33 np0005542546 kernel: usbhid: USB HID core driver
Dec  2 10:47:33 np0005542546 kernel: drop_monitor: Initializing network drop monitor service
Dec  2 10:47:33 np0005542546 kernel: Initializing XFRM netlink socket
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_INET6 protocol family
Dec  2 10:47:33 np0005542546 kernel: Segment Routing with IPv6
Dec  2 10:47:33 np0005542546 kernel: NET: Registered PF_PACKET protocol family
Dec  2 10:47:33 np0005542546 kernel: mpls_gso: MPLS GSO support
Dec  2 10:47:33 np0005542546 kernel: IPI shorthand broadcast: enabled
Dec  2 10:47:33 np0005542546 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  2 10:47:33 np0005542546 kernel: AES CTR mode by8 optimization enabled
Dec  2 10:47:33 np0005542546 kernel: sched_clock: Marking stable (1135002289, 153037150)->(1411509779, -123470340)
Dec  2 10:47:33 np0005542546 kernel: registered taskstats version 1
Dec  2 10:47:33 np0005542546 kernel: Loading compiled-in X.509 certificates
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  2 10:47:33 np0005542546 kernel: Demotion targets for Node 0: null
Dec  2 10:47:33 np0005542546 kernel: page_owner is disabled
Dec  2 10:47:33 np0005542546 kernel: Key type .fscrypt registered
Dec  2 10:47:33 np0005542546 kernel: Key type fscrypt-provisioning registered
Dec  2 10:47:33 np0005542546 kernel: Key type big_key registered
Dec  2 10:47:33 np0005542546 kernel: Key type encrypted registered
Dec  2 10:47:33 np0005542546 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  2 10:47:33 np0005542546 kernel: Loading compiled-in module X.509 certificates
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  2 10:47:33 np0005542546 kernel: ima: Allocated hash algorithm: sha256
Dec  2 10:47:33 np0005542546 kernel: ima: No architecture policies found
Dec  2 10:47:33 np0005542546 kernel: evm: Initialising EVM extended attributes:
Dec  2 10:47:33 np0005542546 kernel: evm: security.selinux
Dec  2 10:47:33 np0005542546 kernel: evm: security.SMACK64 (disabled)
Dec  2 10:47:33 np0005542546 kernel: evm: security.SMACK64EXEC (disabled)
Dec  2 10:47:33 np0005542546 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  2 10:47:33 np0005542546 kernel: evm: security.SMACK64MMAP (disabled)
Dec  2 10:47:33 np0005542546 kernel: evm: security.apparmor (disabled)
Dec  2 10:47:33 np0005542546 kernel: evm: security.ima
Dec  2 10:47:33 np0005542546 kernel: evm: security.capability
Dec  2 10:47:33 np0005542546 kernel: evm: HMAC attrs: 0x1
Dec  2 10:47:33 np0005542546 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  2 10:47:33 np0005542546 kernel: Running certificate verification RSA selftest
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  2 10:47:33 np0005542546 kernel: Running certificate verification ECDSA selftest
Dec  2 10:47:33 np0005542546 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  2 10:47:33 np0005542546 kernel: clk: Disabling unused clocks
Dec  2 10:47:33 np0005542546 kernel: Freeing unused decrypted memory: 2028K
Dec  2 10:47:33 np0005542546 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  2 10:47:33 np0005542546 kernel: Write protecting the kernel read-only data: 30720k
Dec  2 10:47:33 np0005542546 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  2 10:47:33 np0005542546 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  2 10:47:33 np0005542546 kernel: Run /init as init process
Dec  2 10:47:33 np0005542546 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  2 10:47:33 np0005542546 systemd: Detected virtualization kvm.
Dec  2 10:47:33 np0005542546 systemd: Detected architecture x86-64.
Dec  2 10:47:33 np0005542546 systemd: Running in initrd.
Dec  2 10:47:33 np0005542546 systemd: No hostname configured, using default hostname.
Dec  2 10:47:33 np0005542546 systemd: Hostname set to <localhost>.
Dec  2 10:47:33 np0005542546 systemd: Initializing machine ID from VM UUID.
Dec  2 10:47:33 np0005542546 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  2 10:47:33 np0005542546 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  2 10:47:33 np0005542546 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  2 10:47:33 np0005542546 kernel: usb 1-1: Manufacturer: QEMU
Dec  2 10:47:33 np0005542546 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  2 10:47:33 np0005542546 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  2 10:47:33 np0005542546 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  2 10:47:33 np0005542546 systemd: Queued start job for default target Initrd Default Target.
Dec  2 10:47:33 np0005542546 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  2 10:47:33 np0005542546 systemd: Reached target Local Encrypted Volumes.
Dec  2 10:47:33 np0005542546 systemd: Reached target Initrd /usr File System.
Dec  2 10:47:33 np0005542546 systemd: Reached target Local File Systems.
Dec  2 10:47:33 np0005542546 systemd: Reached target Path Units.
Dec  2 10:47:33 np0005542546 systemd: Reached target Slice Units.
Dec  2 10:47:33 np0005542546 systemd: Reached target Swaps.
Dec  2 10:47:33 np0005542546 systemd: Reached target Timer Units.
Dec  2 10:47:33 np0005542546 systemd: Listening on D-Bus System Message Bus Socket.
Dec  2 10:47:33 np0005542546 systemd: Listening on Journal Socket (/dev/log).
Dec  2 10:47:33 np0005542546 systemd: Listening on Journal Socket.
Dec  2 10:47:33 np0005542546 systemd: Listening on udev Control Socket.
Dec  2 10:47:33 np0005542546 systemd: Listening on udev Kernel Socket.
Dec  2 10:47:33 np0005542546 systemd: Reached target Socket Units.
Dec  2 10:47:33 np0005542546 systemd: Starting Create List of Static Device Nodes...
Dec  2 10:47:33 np0005542546 systemd: Starting Journal Service...
Dec  2 10:47:33 np0005542546 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  2 10:47:33 np0005542546 systemd: Starting Apply Kernel Variables...
Dec  2 10:47:33 np0005542546 systemd: Starting Create System Users...
Dec  2 10:47:33 np0005542546 systemd: Starting Setup Virtual Console...
Dec  2 10:47:33 np0005542546 systemd: Finished Create List of Static Device Nodes.
Dec  2 10:47:33 np0005542546 systemd: Finished Apply Kernel Variables.
Dec  2 10:47:33 np0005542546 systemd: Finished Create System Users.
Dec  2 10:47:33 np0005542546 systemd-journald[306]: Journal started
Dec  2 10:47:33 np0005542546 systemd-journald[306]: Runtime Journal (/run/log/journal/e8b28829c1bb40ef87e781771a26068f) is 8.0M, max 153.6M, 145.6M free.
Dec  2 10:47:33 np0005542546 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  2 10:47:33 np0005542546 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  2 10:47:33 np0005542546 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  2 10:47:33 np0005542546 systemd: Started Journal Service.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  2 10:47:33 np0005542546 systemd[1]: Starting Create Volatile Files and Directories...
Dec  2 10:47:33 np0005542546 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  2 10:47:33 np0005542546 systemd[1]: Finished Create Volatile Files and Directories.
Dec  2 10:47:33 np0005542546 systemd[1]: Finished Setup Virtual Console.
Dec  2 10:47:33 np0005542546 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting dracut cmdline hook...
Dec  2 10:47:33 np0005542546 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Dec  2 10:47:33 np0005542546 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 10:47:33 np0005542546 systemd[1]: Finished dracut cmdline hook.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting dracut pre-udev hook...
Dec  2 10:47:33 np0005542546 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  2 10:47:33 np0005542546 kernel: device-mapper: uevent: version 1.0.3
Dec  2 10:47:33 np0005542546 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  2 10:47:33 np0005542546 kernel: RPC: Registered named UNIX socket transport module.
Dec  2 10:47:33 np0005542546 kernel: RPC: Registered udp transport module.
Dec  2 10:47:33 np0005542546 kernel: RPC: Registered tcp transport module.
Dec  2 10:47:33 np0005542546 kernel: RPC: Registered tcp-with-tls transport module.
Dec  2 10:47:33 np0005542546 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  2 10:47:33 np0005542546 rpc.statd[444]: Version 2.5.4 starting
Dec  2 10:47:33 np0005542546 rpc.statd[444]: Initializing NSM state
Dec  2 10:47:33 np0005542546 rpc.idmapd[449]: Setting log level to 0
Dec  2 10:47:33 np0005542546 systemd[1]: Finished dracut pre-udev hook.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  2 10:47:33 np0005542546 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Dec  2 10:47:33 np0005542546 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting dracut pre-trigger hook...
Dec  2 10:47:33 np0005542546 systemd[1]: Finished dracut pre-trigger hook.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting Coldplug All udev Devices...
Dec  2 10:47:33 np0005542546 systemd[1]: Created slice Slice /system/modprobe.
Dec  2 10:47:33 np0005542546 systemd[1]: Starting Load Kernel Module configfs...
Dec  2 10:47:33 np0005542546 systemd[1]: Finished Coldplug All udev Devices.
Dec  2 10:47:33 np0005542546 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  2 10:47:33 np0005542546 systemd[1]: Reached target Network.
Dec  2 10:47:33 np0005542546 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  2 10:47:33 np0005542546 systemd[1]: Starting dracut initqueue hook...
Dec  2 10:47:33 np0005542546 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 10:47:33 np0005542546 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 10:47:33 np0005542546 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  2 10:47:33 np0005542546 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  2 10:47:33 np0005542546 kernel: vda: vda1
Dec  2 10:47:33 np0005542546 systemd-udevd[497]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 10:47:33 np0005542546 kernel: scsi host0: ata_piix
Dec  2 10:47:33 np0005542546 kernel: scsi host1: ata_piix
Dec  2 10:47:33 np0005542546 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  2 10:47:33 np0005542546 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  2 10:47:33 np0005542546 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  2 10:47:33 np0005542546 systemd[1]: Reached target Initrd Root Device.
Dec  2 10:47:34 np0005542546 systemd[1]: Mounting Kernel Configuration File System...
Dec  2 10:47:34 np0005542546 kernel: ata1: found unknown device (class 0)
Dec  2 10:47:34 np0005542546 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  2 10:47:34 np0005542546 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  2 10:47:34 np0005542546 systemd[1]: Mounted Kernel Configuration File System.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target System Initialization.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Basic System.
Dec  2 10:47:34 np0005542546 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  2 10:47:34 np0005542546 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  2 10:47:34 np0005542546 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  2 10:47:34 np0005542546 systemd[1]: Finished dracut initqueue hook.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Remote File Systems.
Dec  2 10:47:34 np0005542546 systemd[1]: Starting dracut pre-mount hook...
Dec  2 10:47:34 np0005542546 systemd[1]: Finished dracut pre-mount hook.
Dec  2 10:47:34 np0005542546 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec  2 10:47:34 np0005542546 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Dec  2 10:47:34 np0005542546 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  2 10:47:34 np0005542546 systemd[1]: Mounting /sysroot...
Dec  2 10:47:34 np0005542546 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  2 10:47:34 np0005542546 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec  2 10:47:34 np0005542546 kernel: XFS (vda1): Ending clean mount
Dec  2 10:47:34 np0005542546 systemd[1]: Mounted /sysroot.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Initrd Root File System.
Dec  2 10:47:34 np0005542546 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  2 10:47:34 np0005542546 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  2 10:47:34 np0005542546 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Initrd File Systems.
Dec  2 10:47:34 np0005542546 systemd[1]: Reached target Initrd Default Target.
Dec  2 10:47:34 np0005542546 systemd[1]: Starting dracut mount hook...
Dec  2 10:47:34 np0005542546 systemd[1]: Finished dracut mount hook.
Dec  2 10:47:34 np0005542546 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  2 10:47:34 np0005542546 rpc.idmapd[449]: exiting on signal 15
Dec  2 10:47:35 np0005542546 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Network.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Timer Units.
Dec  2 10:47:35 np0005542546 systemd[1]: dbus.socket: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Initrd Default Target.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Basic System.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Initrd Root Device.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Initrd /usr File System.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Path Units.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Remote File Systems.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Slice Units.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Socket Units.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target System Initialization.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Local File Systems.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Swaps.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut mount hook.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut pre-mount hook.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut initqueue hook.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Apply Kernel Variables.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Coldplug All udev Devices.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut pre-trigger hook.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Setup Virtual Console.
Dec  2 10:47:35 np0005542546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Closed udev Control Socket.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Closed udev Kernel Socket.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut pre-udev hook.
Dec  2 10:47:35 np0005542546 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped dracut cmdline hook.
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Cleanup udev Database...
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  2 10:47:35 np0005542546 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Stopped Create System Users.
Dec  2 10:47:35 np0005542546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  2 10:47:35 np0005542546 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Cleanup udev Database.
Dec  2 10:47:35 np0005542546 systemd[1]: Reached target Switch Root.
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Switch Root...
Dec  2 10:47:35 np0005542546 systemd[1]: Switching root.
Dec  2 10:47:35 np0005542546 systemd-journald[306]: Journal stopped
Dec  2 10:47:35 np0005542546 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  2 10:47:35 np0005542546 kernel: audit: type=1404 audit(1764690455.317:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 10:47:35 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 10:47:35 np0005542546 kernel: audit: type=1403 audit(1764690455.458:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  2 10:47:35 np0005542546 systemd: Successfully loaded SELinux policy in 144.763ms.
Dec  2 10:47:35 np0005542546 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.394ms.
Dec  2 10:47:35 np0005542546 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  2 10:47:35 np0005542546 systemd: Detected virtualization kvm.
Dec  2 10:47:35 np0005542546 systemd: Detected architecture x86-64.
Dec  2 10:47:35 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 10:47:35 np0005542546 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd: Stopped Switch Root.
Dec  2 10:47:35 np0005542546 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  2 10:47:35 np0005542546 systemd: Created slice Slice /system/getty.
Dec  2 10:47:35 np0005542546 systemd: Created slice Slice /system/serial-getty.
Dec  2 10:47:35 np0005542546 systemd: Created slice Slice /system/sshd-keygen.
Dec  2 10:47:35 np0005542546 systemd: Created slice User and Session Slice.
Dec  2 10:47:35 np0005542546 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  2 10:47:35 np0005542546 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  2 10:47:35 np0005542546 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  2 10:47:35 np0005542546 systemd: Reached target Local Encrypted Volumes.
Dec  2 10:47:35 np0005542546 systemd: Stopped target Switch Root.
Dec  2 10:47:35 np0005542546 systemd: Stopped target Initrd File Systems.
Dec  2 10:47:35 np0005542546 systemd: Stopped target Initrd Root File System.
Dec  2 10:47:35 np0005542546 systemd: Reached target Local Integrity Protected Volumes.
Dec  2 10:47:35 np0005542546 systemd: Reached target Path Units.
Dec  2 10:47:35 np0005542546 systemd: Reached target rpc_pipefs.target.
Dec  2 10:47:35 np0005542546 systemd: Reached target Slice Units.
Dec  2 10:47:35 np0005542546 systemd: Reached target Swaps.
Dec  2 10:47:35 np0005542546 systemd: Reached target Local Verity Protected Volumes.
Dec  2 10:47:35 np0005542546 systemd: Listening on RPCbind Server Activation Socket.
Dec  2 10:47:35 np0005542546 systemd: Reached target RPC Port Mapper.
Dec  2 10:47:35 np0005542546 systemd: Listening on Process Core Dump Socket.
Dec  2 10:47:35 np0005542546 systemd: Listening on initctl Compatibility Named Pipe.
Dec  2 10:47:35 np0005542546 systemd: Listening on udev Control Socket.
Dec  2 10:47:35 np0005542546 systemd: Listening on udev Kernel Socket.
Dec  2 10:47:35 np0005542546 systemd: Mounting Huge Pages File System...
Dec  2 10:47:35 np0005542546 systemd: Mounting POSIX Message Queue File System...
Dec  2 10:47:35 np0005542546 systemd: Mounting Kernel Debug File System...
Dec  2 10:47:35 np0005542546 systemd: Mounting Kernel Trace File System...
Dec  2 10:47:35 np0005542546 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  2 10:47:35 np0005542546 systemd: Starting Create List of Static Device Nodes...
Dec  2 10:47:35 np0005542546 systemd: Starting Load Kernel Module configfs...
Dec  2 10:47:35 np0005542546 systemd: Starting Load Kernel Module drm...
Dec  2 10:47:35 np0005542546 systemd: Starting Load Kernel Module efi_pstore...
Dec  2 10:47:35 np0005542546 systemd: Starting Load Kernel Module fuse...
Dec  2 10:47:35 np0005542546 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  2 10:47:35 np0005542546 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd: Stopped File System Check on Root Device.
Dec  2 10:47:35 np0005542546 systemd: Stopped Journal Service.
Dec  2 10:47:35 np0005542546 systemd: Starting Journal Service...
Dec  2 10:47:35 np0005542546 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  2 10:47:35 np0005542546 systemd: Starting Generate network units from Kernel command line...
Dec  2 10:47:35 np0005542546 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 10:47:35 np0005542546 systemd: Starting Remount Root and Kernel File Systems...
Dec  2 10:47:35 np0005542546 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  2 10:47:35 np0005542546 systemd: Starting Apply Kernel Variables...
Dec  2 10:47:35 np0005542546 kernel: fuse: init (API version 7.37)
Dec  2 10:47:35 np0005542546 systemd: Starting Coldplug All udev Devices...
Dec  2 10:47:35 np0005542546 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  2 10:47:35 np0005542546 systemd: Mounted Huge Pages File System.
Dec  2 10:47:35 np0005542546 systemd-journald[679]: Journal started
Dec  2 10:47:35 np0005542546 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  2 10:47:35 np0005542546 systemd[1]: Queued start job for default target Multi-User System.
Dec  2 10:47:35 np0005542546 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd: Started Journal Service.
Dec  2 10:47:35 np0005542546 systemd[1]: Mounted POSIX Message Queue File System.
Dec  2 10:47:35 np0005542546 systemd[1]: Mounted Kernel Debug File System.
Dec  2 10:47:35 np0005542546 systemd[1]: Mounted Kernel Trace File System.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Create List of Static Device Nodes.
Dec  2 10:47:35 np0005542546 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 10:47:35 np0005542546 kernel: ACPI: bus type drm_connector registered
Dec  2 10:47:35 np0005542546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  2 10:47:35 np0005542546 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Load Kernel Module drm.
Dec  2 10:47:35 np0005542546 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Load Kernel Module fuse.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Generate network units from Kernel command line.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Apply Kernel Variables.
Dec  2 10:47:35 np0005542546 systemd[1]: Mounting FUSE Control File System...
Dec  2 10:47:35 np0005542546 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Rebuild Hardware Database...
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  2 10:47:35 np0005542546 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Load/Save OS Random Seed...
Dec  2 10:47:35 np0005542546 systemd[1]: Starting Create System Users...
Dec  2 10:47:35 np0005542546 systemd[1]: Mounted FUSE Control File System.
Dec  2 10:47:35 np0005542546 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  2 10:47:35 np0005542546 systemd-journald[679]: Received client request to flush runtime journal.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Load/Save OS Random Seed.
Dec  2 10:47:35 np0005542546 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  2 10:47:35 np0005542546 systemd[1]: Finished Create System Users.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Coldplug All udev Devices.
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target Preparation for Local File Systems.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target Local File Systems.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  2 10:47:36 np0005542546 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  2 10:47:36 np0005542546 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  2 10:47:36 np0005542546 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Automatic Boot Loader Update...
Dec  2 10:47:36 np0005542546 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Create Volatile Files and Directories...
Dec  2 10:47:36 np0005542546 bootctl[695]: Couldn't find EFI system partition, skipping.
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Automatic Boot Loader Update.
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Create Volatile Files and Directories.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Security Auditing Service...
Dec  2 10:47:36 np0005542546 systemd[1]: Starting RPC Bind...
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Rebuild Journal Catalog...
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  2 10:47:36 np0005542546 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  2 10:47:36 np0005542546 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Rebuild Journal Catalog.
Dec  2 10:47:36 np0005542546 systemd[1]: Started RPC Bind.
Dec  2 10:47:36 np0005542546 augenrules[706]: /sbin/augenrules: No change
Dec  2 10:47:36 np0005542546 augenrules[721]: No rules
Dec  2 10:47:36 np0005542546 augenrules[721]: enabled 1
Dec  2 10:47:36 np0005542546 augenrules[721]: failure 1
Dec  2 10:47:36 np0005542546 augenrules[721]: pid 701
Dec  2 10:47:36 np0005542546 augenrules[721]: rate_limit 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_limit 8192
Dec  2 10:47:36 np0005542546 augenrules[721]: lost 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog 2
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_wait_time 60000
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_wait_time_actual 0
Dec  2 10:47:36 np0005542546 augenrules[721]: enabled 1
Dec  2 10:47:36 np0005542546 augenrules[721]: failure 1
Dec  2 10:47:36 np0005542546 augenrules[721]: pid 701
Dec  2 10:47:36 np0005542546 augenrules[721]: rate_limit 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_limit 8192
Dec  2 10:47:36 np0005542546 augenrules[721]: lost 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog 2
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_wait_time 60000
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_wait_time_actual 0
Dec  2 10:47:36 np0005542546 augenrules[721]: enabled 1
Dec  2 10:47:36 np0005542546 augenrules[721]: failure 1
Dec  2 10:47:36 np0005542546 augenrules[721]: pid 701
Dec  2 10:47:36 np0005542546 augenrules[721]: rate_limit 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_limit 8192
Dec  2 10:47:36 np0005542546 augenrules[721]: lost 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog 0
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_wait_time 60000
Dec  2 10:47:36 np0005542546 augenrules[721]: backlog_wait_time_actual 0
Dec  2 10:47:36 np0005542546 systemd[1]: Started Security Auditing Service.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Rebuild Hardware Database.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Update is Completed...
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Update is Completed.
Dec  2 10:47:36 np0005542546 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec  2 10:47:36 np0005542546 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target System Initialization.
Dec  2 10:47:36 np0005542546 systemd[1]: Started dnf makecache --timer.
Dec  2 10:47:36 np0005542546 systemd[1]: Started Daily rotation of log files.
Dec  2 10:47:36 np0005542546 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target Timer Units.
Dec  2 10:47:36 np0005542546 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  2 10:47:36 np0005542546 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target Socket Units.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting D-Bus System Message Bus...
Dec  2 10:47:36 np0005542546 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 10:47:36 np0005542546 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  2 10:47:36 np0005542546 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Load Kernel Module configfs...
Dec  2 10:47:36 np0005542546 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 10:47:36 np0005542546 systemd[1]: Started D-Bus System Message Bus.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target Basic System.
Dec  2 10:47:36 np0005542546 dbus-broker-lau[753]: Ready
Dec  2 10:47:36 np0005542546 systemd[1]: Starting NTP client/server...
Dec  2 10:47:36 np0005542546 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  2 10:47:36 np0005542546 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  2 10:47:36 np0005542546 systemd[1]: Starting IPv4 firewall with iptables...
Dec  2 10:47:36 np0005542546 systemd[1]: Started irqbalance daemon.
Dec  2 10:47:36 np0005542546 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  2 10:47:36 np0005542546 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 10:47:36 np0005542546 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 10:47:36 np0005542546 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target sshd-keygen.target.
Dec  2 10:47:36 np0005542546 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  2 10:47:36 np0005542546 systemd[1]: Reached target User and Group Name Lookups.
Dec  2 10:47:36 np0005542546 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  2 10:47:36 np0005542546 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  2 10:47:36 np0005542546 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  2 10:47:36 np0005542546 chronyd[793]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  2 10:47:36 np0005542546 systemd[1]: Starting User Login Management...
Dec  2 10:47:36 np0005542546 chronyd[793]: Loaded 0 symmetric keys
Dec  2 10:47:36 np0005542546 chronyd[793]: Using right/UTC timezone to obtain leap second data
Dec  2 10:47:36 np0005542546 chronyd[793]: Loaded seccomp filter (level 2)
Dec  2 10:47:36 np0005542546 systemd[1]: Started NTP client/server.
Dec  2 10:47:36 np0005542546 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  2 10:47:36 np0005542546 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  2 10:47:36 np0005542546 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  2 10:47:36 np0005542546 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  2 10:47:36 np0005542546 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  2 10:47:36 np0005542546 systemd-logind[790]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  2 10:47:36 np0005542546 systemd-logind[790]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  2 10:47:36 np0005542546 kernel: Console: switching to colour dummy device 80x25
Dec  2 10:47:36 np0005542546 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  2 10:47:36 np0005542546 kernel: [drm] features: -context_init
Dec  2 10:47:36 np0005542546 systemd-logind[790]: New seat seat0.
Dec  2 10:47:36 np0005542546 systemd[1]: Started User Login Management.
Dec  2 10:47:36 np0005542546 kernel: [drm] number of scanouts: 1
Dec  2 10:47:36 np0005542546 kernel: [drm] number of cap sets: 0
Dec  2 10:47:36 np0005542546 kernel: kvm_amd: TSC scaling supported
Dec  2 10:47:36 np0005542546 kernel: kvm_amd: Nested Virtualization enabled
Dec  2 10:47:36 np0005542546 kernel: kvm_amd: Nested Paging enabled
Dec  2 10:47:36 np0005542546 kernel: kvm_amd: LBR virtualization supported
Dec  2 10:47:36 np0005542546 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  2 10:47:36 np0005542546 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  2 10:47:36 np0005542546 kernel: Console: switching to colour frame buffer device 128x48
Dec  2 10:47:36 np0005542546 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  2 10:47:37 np0005542546 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Dec  2 10:47:37 np0005542546 systemd[1]: Finished IPv4 firewall with iptables.
Dec  2 10:47:37 np0005542546 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 02 Dec 2025 15:47:37 +0000. Up 5.79 seconds.
Dec  2 10:47:37 np0005542546 systemd[1]: run-cloud\x2dinit-tmp-tmpakrcwj3u.mount: Deactivated successfully.
Dec  2 10:47:37 np0005542546 systemd[1]: Starting Hostname Service...
Dec  2 10:47:37 np0005542546 systemd[1]: Started Hostname Service.
Dec  2 10:47:37 np0005542546 systemd-hostnamed[851]: Hostname set to <np0005542546.novalocal> (static)
Dec  2 10:47:37 np0005542546 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  2 10:47:37 np0005542546 systemd[1]: Reached target Preparation for Network.
Dec  2 10:47:37 np0005542546 systemd[1]: Starting Network Manager...
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6657] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4bc2fc38-bd6d-4040-a7f5-cb188f94ca47)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6661] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6724] manager[0x564f8125a080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6765] hostname: hostname: using hostnamed
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6766] hostname: static hostname changed from (none) to "np0005542546.novalocal"
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6770] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6874] manager[0x564f8125a080]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6875] manager[0x564f8125a080]: rfkill: WWAN hardware radio set enabled
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6912] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6912] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6913] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6914] manager: Networking is enabled by state file
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6916] settings: Loaded settings plugin: keyfile (internal)
Dec  2 10:47:37 np0005542546 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6924] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6944] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6957] dhcp: init: Using DHCP client 'internal'
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6960] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6974] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6982] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.6991] device (lo): Activation: starting connection 'lo' (3326fae0-34b6-4b68-8a72-7a7ca30af2b3)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7000] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7003] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7030] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7034] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7037] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7039] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7040] device (eth0): carrier: link connected
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7044] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7050] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7055] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7058] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7059] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7061] manager: NetworkManager state is now CONNECTING
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7062] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7068] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7071] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 10:47:37 np0005542546 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 10:47:37 np0005542546 systemd[1]: Started Network Manager.
Dec  2 10:47:37 np0005542546 systemd[1]: Reached target Network.
Dec  2 10:47:37 np0005542546 systemd[1]: Starting Network Manager Wait Online...
Dec  2 10:47:37 np0005542546 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  2 10:47:37 np0005542546 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7305] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7308] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 10:47:37 np0005542546 NetworkManager[855]: <info>  [1764690457.7316] device (lo): Activation: successful, device activated.
Dec  2 10:47:37 np0005542546 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  2 10:47:37 np0005542546 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  2 10:47:37 np0005542546 systemd[1]: Reached target NFS client services.
Dec  2 10:47:37 np0005542546 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  2 10:47:37 np0005542546 systemd[1]: Reached target Remote File Systems.
Dec  2 10:47:37 np0005542546 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5840] dhcp4 (eth0): state changed new lease, address=38.102.83.151
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5854] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5878] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5920] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5922] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5926] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5930] device (eth0): Activation: successful, device activated.
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5936] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 10:47:39 np0005542546 NetworkManager[855]: <info>  [1764690459.5940] manager: startup complete
Dec  2 10:47:39 np0005542546 systemd[1]: Finished Network Manager Wait Online.
Dec  2 10:47:39 np0005542546 systemd[1]: Starting Cloud-init: Network Stage...
Dec  2 10:47:39 np0005542546 cloud-init[918]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 02 Dec 2025 15:47:39 +0000. Up 8.48 seconds.
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.151        | 255.255.255.0 | global | fa:16:3e:be:01:cd |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:febe:1cd/64 |       .       |  link  | fa:16:3e:be:01:cd |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  2 10:47:39 np0005542546 cloud-init[918]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec  2 10:47:40 np0005542546 cloud-init[918]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec  2 10:47:40 np0005542546 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 10:47:41 np0005542546 cloud-init[918]: Generating public/private rsa key pair.
Dec  2 10:47:41 np0005542546 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  2 10:47:41 np0005542546 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  2 10:47:41 np0005542546 cloud-init[918]: The key fingerprint is:
Dec  2 10:47:41 np0005542546 cloud-init[918]: SHA256:iIvnvNeE06lZNJLvwU00GdDiJVv33sK+7eOFkis2RYg root@np0005542546.novalocal
Dec  2 10:47:41 np0005542546 cloud-init[918]: The key's randomart image is:
Dec  2 10:47:41 np0005542546 cloud-init[918]: +---[RSA 3072]----+
Dec  2 10:47:41 np0005542546 cloud-init[918]: |        .o.o     |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |        o B .    |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |       o O + .   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |     .o.E o . .  |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |    . .BS= . o . |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |   . .o O . ..+..|
Dec  2 10:47:41 np0005542546 cloud-init[918]: |  . o  O . .o....|
Dec  2 10:47:41 np0005542546 cloud-init[918]: |   +  + o +  o.o.|
Dec  2 10:47:41 np0005542546 cloud-init[918]: |    +o   . o. o++|
Dec  2 10:47:41 np0005542546 cloud-init[918]: +----[SHA256]-----+
Dec  2 10:47:41 np0005542546 cloud-init[918]: Generating public/private ecdsa key pair.
Dec  2 10:47:41 np0005542546 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  2 10:47:41 np0005542546 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  2 10:47:41 np0005542546 cloud-init[918]: The key fingerprint is:
Dec  2 10:47:41 np0005542546 cloud-init[918]: SHA256:QRVtVq6GLYivXzG4arEsIlPS/+PJw7tv23LzHlibGto root@np0005542546.novalocal
Dec  2 10:47:41 np0005542546 cloud-init[918]: The key's randomart image is:
Dec  2 10:47:41 np0005542546 cloud-init[918]: +---[ECDSA 256]---+
Dec  2 10:47:41 np0005542546 cloud-init[918]: |        ..oo ..  |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |       .    +.   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |        .  o  .  |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |       . + o .   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: | .    . S = =    |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |. o   .. . B o   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: | o . o oo + +    |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |o . o.B+o=oo .   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: | o . =XO=+E+o    |
Dec  2 10:47:41 np0005542546 cloud-init[918]: +----[SHA256]-----+
Dec  2 10:47:41 np0005542546 cloud-init[918]: Generating public/private ed25519 key pair.
Dec  2 10:47:41 np0005542546 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  2 10:47:41 np0005542546 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  2 10:47:41 np0005542546 cloud-init[918]: The key fingerprint is:
Dec  2 10:47:41 np0005542546 cloud-init[918]: SHA256:wkR5az1QtK4HvLcntkWJGWelp2gMqd3D8inz7PzcDpw root@np0005542546.novalocal
Dec  2 10:47:41 np0005542546 cloud-init[918]: The key's randomart image is:
Dec  2 10:47:41 np0005542546 cloud-init[918]: +--[ED25519 256]--+
Dec  2 10:47:41 np0005542546 cloud-init[918]: |      .. oo   .  |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |     .. o. . o   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |      ..o+o + .  |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |     o +o*oB +   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |      +.S @.+    |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |       . B = .   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |        = = E    |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |         Oo+.o   |
Dec  2 10:47:41 np0005542546 cloud-init[918]: |         oB=o.o  |
Dec  2 10:47:41 np0005542546 cloud-init[918]: +----[SHA256]-----+
Dec  2 10:47:41 np0005542546 systemd[1]: Finished Cloud-init: Network Stage.
Dec  2 10:47:41 np0005542546 systemd[1]: Reached target Cloud-config availability.
Dec  2 10:47:41 np0005542546 systemd[1]: Reached target Network is Online.
Dec  2 10:47:41 np0005542546 systemd[1]: Starting Cloud-init: Config Stage...
Dec  2 10:47:41 np0005542546 systemd[1]: Starting Crash recovery kernel arming...
Dec  2 10:47:41 np0005542546 systemd[1]: Starting Notify NFS peers of a restart...
Dec  2 10:47:41 np0005542546 systemd[1]: Starting System Logging Service...
Dec  2 10:47:41 np0005542546 systemd[1]: Starting OpenSSH server daemon...
Dec  2 10:47:41 np0005542546 sm-notify[1003]: Version 2.5.4 starting
Dec  2 10:47:41 np0005542546 systemd[1]: Starting Permit User Sessions...
Dec  2 10:47:41 np0005542546 systemd[1]: Started Notify NFS peers of a restart.
Dec  2 10:47:41 np0005542546 systemd[1]: Started OpenSSH server daemon.
Dec  2 10:47:41 np0005542546 systemd[1]: Finished Permit User Sessions.
Dec  2 10:47:41 np0005542546 systemd[1]: Started Command Scheduler.
Dec  2 10:47:41 np0005542546 systemd[1]: Started Getty on tty1.
Dec  2 10:47:41 np0005542546 systemd[1]: Started Serial Getty on ttyS0.
Dec  2 10:47:41 np0005542546 systemd[1]: Reached target Login Prompts.
Dec  2 10:47:41 np0005542546 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec  2 10:47:41 np0005542546 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  2 10:47:41 np0005542546 systemd[1]: Started System Logging Service.
Dec  2 10:47:41 np0005542546 systemd[1]: Reached target Multi-User System.
Dec  2 10:47:41 np0005542546 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  2 10:47:41 np0005542546 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  2 10:47:41 np0005542546 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  2 10:47:41 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 10:47:41 np0005542546 kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Dec  2 10:47:41 np0005542546 kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  2 10:47:41 np0005542546 cloud-init[1136]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 02 Dec 2025 15:47:41 +0000. Up 10.20 seconds.
Dec  2 10:47:41 np0005542546 systemd[1]: Finished Cloud-init: Config Stage.
Dec  2 10:47:41 np0005542546 systemd[1]: Starting Cloud-init: Final Stage...
Dec  2 10:47:41 np0005542546 dracut[1264]: dracut-057-102.git20250818.el9
Dec  2 10:47:42 np0005542546 cloud-init[1282]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 02 Dec 2025 15:47:42 +0000. Up 10.61 seconds.
Dec  2 10:47:42 np0005542546 cloud-init[1290]: #############################################################
Dec  2 10:47:42 np0005542546 cloud-init[1295]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  2 10:47:42 np0005542546 dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  2 10:47:42 np0005542546 cloud-init[1301]: 256 SHA256:QRVtVq6GLYivXzG4arEsIlPS/+PJw7tv23LzHlibGto root@np0005542546.novalocal (ECDSA)
Dec  2 10:47:42 np0005542546 cloud-init[1309]: 256 SHA256:wkR5az1QtK4HvLcntkWJGWelp2gMqd3D8inz7PzcDpw root@np0005542546.novalocal (ED25519)
Dec  2 10:47:42 np0005542546 cloud-init[1314]: 3072 SHA256:iIvnvNeE06lZNJLvwU00GdDiJVv33sK+7eOFkis2RYg root@np0005542546.novalocal (RSA)
Dec  2 10:47:42 np0005542546 cloud-init[1317]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  2 10:47:42 np0005542546 cloud-init[1320]: #############################################################
Dec  2 10:47:42 np0005542546 cloud-init[1282]: Cloud-init v. 24.4-7.el9 finished at Tue, 02 Dec 2025 15:47:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.79 seconds
Dec  2 10:47:42 np0005542546 systemd[1]: Finished Cloud-init: Final Stage.
Dec  2 10:47:42 np0005542546 systemd[1]: Reached target Cloud-init target.
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  2 10:47:42 np0005542546 dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: memstrack is not available
Dec  2 10:47:43 np0005542546 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  2 10:47:43 np0005542546 dracut[1266]: memstrack is not available
Dec  2 10:47:43 np0005542546 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  2 10:47:43 np0005542546 dracut[1266]: *** Including module: systemd ***
Dec  2 10:47:43 np0005542546 dracut[1266]: *** Including module: fips ***
Dec  2 10:47:43 np0005542546 dracut[1266]: *** Including module: systemd-initrd ***
Dec  2 10:47:43 np0005542546 dracut[1266]: *** Including module: i18n ***
Dec  2 10:47:44 np0005542546 dracut[1266]: *** Including module: drm ***
Dec  2 10:47:44 np0005542546 dracut[1266]: *** Including module: prefixdevname ***
Dec  2 10:47:44 np0005542546 dracut[1266]: *** Including module: kernel-modules ***
Dec  2 10:47:44 np0005542546 chronyd[793]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Dec  2 10:47:44 np0005542546 chronyd[793]: System clock TAI offset set to 37 seconds
Dec  2 10:47:44 np0005542546 kernel: block vda: the capability attribute has been deprecated.
Dec  2 10:47:44 np0005542546 dracut[1266]: *** Including module: kernel-modules-extra ***
Dec  2 10:47:44 np0005542546 dracut[1266]: *** Including module: qemu ***
Dec  2 10:47:44 np0005542546 dracut[1266]: *** Including module: fstab-sys ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: rootfs-block ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: terminfo ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: udev-rules ***
Dec  2 10:47:45 np0005542546 dracut[1266]: Skipping udev rule: 91-permissions.rules
Dec  2 10:47:45 np0005542546 dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: virtiofs ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: dracut-systemd ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: usrmount ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: base ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: fs-lib ***
Dec  2 10:47:45 np0005542546 dracut[1266]: *** Including module: kdumpbase ***
Dec  2 10:47:46 np0005542546 dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  2 10:47:46 np0005542546 dracut[1266]:  microcode_ctl module: mangling fw_dir
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  2 10:47:46 np0005542546 dracut[1266]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  2 10:47:46 np0005542546 dracut[1266]: *** Including module: openssl ***
Dec  2 10:47:47 np0005542546 dracut[1266]: *** Including module: shutdown ***
Dec  2 10:47:47 np0005542546 dracut[1266]: *** Including module: squash ***
Dec  2 10:47:47 np0005542546 dracut[1266]: *** Including modules done ***
Dec  2 10:47:47 np0005542546 dracut[1266]: *** Installing kernel module dependencies ***
Dec  2 10:47:47 np0005542546 irqbalance[785]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  2 10:47:47 np0005542546 irqbalance[785]: IRQ 25 affinity is now unmanaged
Dec  2 10:47:47 np0005542546 irqbalance[785]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  2 10:47:47 np0005542546 irqbalance[785]: IRQ 31 affinity is now unmanaged
Dec  2 10:47:47 np0005542546 irqbalance[785]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  2 10:47:47 np0005542546 irqbalance[785]: IRQ 28 affinity is now unmanaged
Dec  2 10:47:47 np0005542546 irqbalance[785]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  2 10:47:47 np0005542546 irqbalance[785]: IRQ 32 affinity is now unmanaged
Dec  2 10:47:47 np0005542546 irqbalance[785]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  2 10:47:47 np0005542546 irqbalance[785]: IRQ 30 affinity is now unmanaged
Dec  2 10:47:47 np0005542546 irqbalance[785]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  2 10:47:47 np0005542546 irqbalance[785]: IRQ 29 affinity is now unmanaged
Dec  2 10:47:48 np0005542546 dracut[1266]: *** Installing kernel module dependencies done ***
Dec  2 10:47:48 np0005542546 dracut[1266]: *** Resolving executable dependencies ***
Dec  2 10:47:49 np0005542546 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 10:47:49 np0005542546 dracut[1266]: *** Resolving executable dependencies done ***
Dec  2 10:47:49 np0005542546 dracut[1266]: *** Generating early-microcode cpio image ***
Dec  2 10:47:49 np0005542546 dracut[1266]: *** Store current command line parameters ***
Dec  2 10:47:49 np0005542546 dracut[1266]: Stored kernel commandline:
Dec  2 10:47:49 np0005542546 dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Dec  2 10:47:50 np0005542546 dracut[1266]: *** Install squash loader ***
Dec  2 10:47:51 np0005542546 dracut[1266]: *** Squashing the files inside the initramfs ***
Dec  2 10:47:52 np0005542546 dracut[1266]: *** Squashing the files inside the initramfs done ***
Dec  2 10:47:52 np0005542546 dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  2 10:47:52 np0005542546 dracut[1266]: *** Hardlinking files ***
Dec  2 10:47:52 np0005542546 dracut[1266]: *** Hardlinking files done ***
Dec  2 10:47:52 np0005542546 dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  2 10:47:53 np0005542546 kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Dec  2 10:47:53 np0005542546 kdumpctl[1013]: kdump: Starting kdump: [OK]
Dec  2 10:47:53 np0005542546 systemd[1]: Finished Crash recovery kernel arming.
Dec  2 10:47:53 np0005542546 systemd[1]: Startup finished in 1.455s (kernel) + 2.454s (initrd) + 17.807s (userspace) = 21.718s.
Dec  2 10:48:05 np0005542546 systemd[1]: Created slice User Slice of UID 1000.
Dec  2 10:48:05 np0005542546 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  2 10:48:05 np0005542546 systemd-logind[790]: New session 1 of user zuul.
Dec  2 10:48:05 np0005542546 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  2 10:48:05 np0005542546 systemd[1]: Starting User Manager for UID 1000...
Dec  2 10:48:05 np0005542546 systemd[4299]: Queued start job for default target Main User Target.
Dec  2 10:48:05 np0005542546 systemd[4299]: Created slice User Application Slice.
Dec  2 10:48:05 np0005542546 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  2 10:48:05 np0005542546 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 10:48:05 np0005542546 systemd[4299]: Reached target Paths.
Dec  2 10:48:05 np0005542546 systemd[4299]: Reached target Timers.
Dec  2 10:48:05 np0005542546 systemd[4299]: Starting D-Bus User Message Bus Socket...
Dec  2 10:48:05 np0005542546 systemd[4299]: Starting Create User's Volatile Files and Directories...
Dec  2 10:48:05 np0005542546 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Dec  2 10:48:05 np0005542546 systemd[4299]: Reached target Sockets.
Dec  2 10:48:05 np0005542546 systemd[4299]: Finished Create User's Volatile Files and Directories.
Dec  2 10:48:05 np0005542546 systemd[4299]: Reached target Basic System.
Dec  2 10:48:05 np0005542546 systemd[4299]: Reached target Main User Target.
Dec  2 10:48:05 np0005542546 systemd[4299]: Startup finished in 136ms.
Dec  2 10:48:05 np0005542546 systemd[1]: Started User Manager for UID 1000.
Dec  2 10:48:05 np0005542546 systemd[1]: Started Session 1 of User zuul.
Dec  2 10:48:05 np0005542546 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 10:48:07 np0005542546 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 10:48:08 np0005542546 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 10:48:15 np0005542546 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 10:48:15 np0005542546 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  2 10:48:17 np0005542546 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWXsyem/AFaGLVSzc7n22gUzeef5khUKOy9TIsGKElKxVx9iB9r12Jo1k2XbPrPUhq8MSso/12OIZCYG+bd4jL53Of/niscmDSJ5pI7maVEZxJnvnfGEJ1xPT9T07BGMpafWOvKgHTeZJjA9zOq++VCylsYcaucUe61FtRmkP8xJAnArIy5X8fPpMVra5dsyddDY9lN/9mfwrxXhrOsEuH25wnbBgDYhfG1xwR5W5TTE/w9EALNOfMM8Npt5c6PU4tXt0X6LxoFd2s4aSi9fn2UI898He9kkCiaKFIOaP+RRkTiyCeIkhI0YF6cpH+WyqMiGKL/GIWiZLbF4OHNpnnlZL0StAqZR4LkmZrRU08VjQbohn6dhtzbvXgJOsE0ydW8BBkt/Bjm4lMfx+FOdqX8snjklwNAOHHK2uvqRf/qZGKdNbjTfWdvdhdEOg80EbxOV4gHMpRaUkdn/ENOyopytYNYFg8Bdu0cn4trU64wVPOKw7qBgBDUXkkCt8wIhk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:18 np0005542546 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:18 np0005542546 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:19 np0005542546 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764690498.4341528-207-162966215369064/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=189ea20e328e4cd883b54d4ca6d6495d_id_rsa follow=False checksum=e57812f70d79f2317decfdfa6157c8291e30d279 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:19 np0005542546 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:20 np0005542546 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764690499.7150254-240-236129466175980/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=189ea20e328e4cd883b54d4ca6d6495d_id_rsa.pub follow=False checksum=c055e19374f4894c7a1a248c519e99654c4e19ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:21 np0005542546 python3[4971]: ansible-ping Invoked with data=pong
Dec  2 10:48:22 np0005542546 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 10:48:24 np0005542546 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  2 10:48:25 np0005542546 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:25 np0005542546 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:25 np0005542546 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:26 np0005542546 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:26 np0005542546 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:26 np0005542546 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:28 np0005542546 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:28 np0005542546 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:29 np0005542546 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764690508.3706489-21-158107587610335/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:29 np0005542546 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:30 np0005542546 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:30 np0005542546 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:30 np0005542546 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:30 np0005542546 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:31 np0005542546 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:31 np0005542546 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:31 np0005542546 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:31 np0005542546 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:32 np0005542546 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:32 np0005542546 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:32 np0005542546 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:32 np0005542546 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:33 np0005542546 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:33 np0005542546 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:33 np0005542546 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:33 np0005542546 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:34 np0005542546 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:34 np0005542546 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:34 np0005542546 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:35 np0005542546 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:35 np0005542546 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:35 np0005542546 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:35 np0005542546 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:36 np0005542546 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:36 np0005542546 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:48:39 np0005542546 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  2 10:48:39 np0005542546 systemd[1]: Starting Time & Date Service...
Dec  2 10:48:39 np0005542546 systemd[1]: Started Time & Date Service.
Dec  2 10:48:40 np0005542546 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec  2 10:48:40 np0005542546 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:40 np0005542546 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:41 np0005542546 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764690520.607635-153-13220852277095/source _original_basename=tmpz7_xv0wq follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:41 np0005542546 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:41 np0005542546 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764690521.4349353-183-126837263826976/source _original_basename=tmptzme4yy3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:42 np0005542546 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:43 np0005542546 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764690522.4971719-231-55367087764702/source _original_basename=tmprz03rnqf follow=False checksum=3a1440758208a7ff90a6a51d370205d9deb30bcc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:43 np0005542546 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:48:43 np0005542546 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:48:44 np0005542546 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:48:44 np0005542546 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764690524.2262237-273-159139176433402/source _original_basename=tmpbtw5kwqj follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:48:45 np0005542546 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-6839-fae5-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:48:46 np0005542546 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-6839-fae5-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  2 10:48:47 np0005542546 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:49:04 np0005542546 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:49:10 np0005542546 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  2 10:49:40 np0005542546 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  2 10:49:40 np0005542546 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0241] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 10:49:41 np0005542546 systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0418] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0442] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0445] device (eth1): carrier: link connected
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0447] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0451] policy: auto-activating connection 'Wired connection 1' (0cca5ee4-d085-35f2-bb0e-3e3a7d58eff1)
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0455] device (eth1): Activation: starting connection 'Wired connection 1' (0cca5ee4-d085-35f2-bb0e-3e3a7d58eff1)
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0455] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0458] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0461] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 10:49:41 np0005542546 NetworkManager[855]: <info>  [1764690581.0464] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 10:49:42 np0005542546 python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-f90d-9ac9-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:49:49 np0005542546 python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:49:49 np0005542546 python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764690588.8356133-102-188985744485672/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=425e09d5aa3be72a7073ea4ce65a7c9692bbe0d7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:49:50 np0005542546 python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 10:49:50 np0005542546 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  2 10:49:50 np0005542546 systemd[1]: Stopped Network Manager Wait Online.
Dec  2 10:49:50 np0005542546 systemd[1]: Stopping Network Manager Wait Online...
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2332] caught SIGTERM, shutting down normally.
Dec  2 10:49:50 np0005542546 systemd[1]: Stopping Network Manager...
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2340] dhcp4 (eth0): canceled DHCP transaction
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2340] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2340] dhcp4 (eth0): state changed no lease
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2344] manager: NetworkManager state is now CONNECTING
Dec  2 10:49:50 np0005542546 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2483] dhcp4 (eth1): canceled DHCP transaction
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2484] dhcp4 (eth1): state changed no lease
Dec  2 10:49:50 np0005542546 NetworkManager[855]: <info>  [1764690590.2549] exiting (success)
Dec  2 10:49:50 np0005542546 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 10:49:50 np0005542546 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  2 10:49:50 np0005542546 systemd[1]: Stopped Network Manager.
Dec  2 10:49:50 np0005542546 systemd[1]: Starting Network Manager...
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.3125] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4bc2fc38-bd6d-4040-a7f5-cb188f94ca47)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.3127] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.3175] manager[0x5586bf88c070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 10:49:50 np0005542546 systemd[1]: Starting Hostname Service...
Dec  2 10:49:50 np0005542546 systemd[1]: Started Hostname Service.
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4080] hostname: hostname: using hostnamed
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4081] hostname: static hostname changed from (none) to "np0005542546.novalocal"
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4085] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4089] manager[0x5586bf88c070]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4089] manager[0x5586bf88c070]: rfkill: WWAN hardware radio set enabled
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4109] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4109] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4109] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4109] manager: Networking is enabled by state file
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4111] settings: Loaded settings plugin: keyfile (internal)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4114] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4136] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4144] dhcp: init: Using DHCP client 'internal'
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4147] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4151] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4154] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4159] device (lo): Activation: starting connection 'lo' (3326fae0-34b6-4b68-8a72-7a7ca30af2b3)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4164] device (eth0): carrier: link connected
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4167] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4170] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4170] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4174] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4178] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4182] device (eth1): carrier: link connected
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4185] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4188] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0cca5ee4-d085-35f2-bb0e-3e3a7d58eff1) (indicated)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4188] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4191] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4196] device (eth1): Activation: starting connection 'Wired connection 1' (0cca5ee4-d085-35f2-bb0e-3e3a7d58eff1)
Dec  2 10:49:50 np0005542546 systemd[1]: Started Network Manager.
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4202] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4205] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4206] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4208] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4209] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4212] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4214] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4216] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4218] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4265] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4267] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4283] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4286] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4304] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4306] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4311] device (lo): Activation: successful, device activated.
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4318] dhcp4 (eth0): state changed new lease, address=38.102.83.151
Dec  2 10:49:50 np0005542546 systemd[1]: Starting Network Manager Wait Online...
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4325] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4400] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4426] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4429] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4431] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4434] device (eth0): Activation: successful, device activated.
Dec  2 10:49:50 np0005542546 NetworkManager[7193]: <info>  [1764690590.4438] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 10:49:50 np0005542546 python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-f90d-9ac9-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:50:00 np0005542546 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 10:50:20 np0005542546 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4046] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 10:50:35 np0005542546 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 10:50:35 np0005542546 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4266] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4269] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4278] device (eth1): Activation: successful, device activated.
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4284] manager: startup complete
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4286] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <warn>  [1764690635.4290] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4299] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 systemd[1]: Finished Network Manager Wait Online.
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4394] dhcp4 (eth1): canceled DHCP transaction
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4394] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4394] dhcp4 (eth1): state changed no lease
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4407] policy: auto-activating connection 'ci-private-network' (0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b)
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4410] device (eth1): Activation: starting connection 'ci-private-network' (0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b)
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4411] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4413] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4418] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4424] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4458] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4460] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 10:50:35 np0005542546 NetworkManager[7193]: <info>  [1764690635.4464] device (eth1): Activation: successful, device activated.
Dec  2 10:50:45 np0005542546 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 10:50:47 np0005542546 systemd[4299]: Starting Mark boot as successful...
Dec  2 10:50:47 np0005542546 systemd[4299]: Finished Mark boot as successful.
Dec  2 10:50:48 np0005542546 python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:50:49 np0005542546 python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764690648.4887426-259-273409387114117/source _original_basename=tmp37irbj7s follow=False checksum=858bff66e3e5a056c06c24c5b24775291419e354 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:51:49 np0005542546 systemd-logind[790]: Session 1 logged out. Waiting for processes to exit.
Dec  2 10:53:47 np0005542546 systemd[4299]: Created slice User Background Tasks Slice.
Dec  2 10:53:47 np0005542546 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Dec  2 10:53:47 np0005542546 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Dec  2 10:57:19 np0005542546 systemd-logind[790]: New session 3 of user zuul.
Dec  2 10:57:19 np0005542546 systemd[1]: Started Session 3 of User zuul.
Dec  2 10:57:19 np0005542546 python3[7511]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-76e6-deed-000000001cda-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:57:21 np0005542546 python3[7540]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:57:21 np0005542546 python3[7566]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:57:21 np0005542546 python3[7592]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:57:21 np0005542546 python3[7618]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:57:22 np0005542546 python3[7644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:57:22 np0005542546 python3[7722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:57:23 np0005542546 python3[7795]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764691042.5888603-491-29910789763592/source _original_basename=tmpm292w4ky follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:57:24 np0005542546 python3[7845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 10:57:24 np0005542546 systemd[1]: Reloading.
Dec  2 10:57:24 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 10:57:25 np0005542546 python3[7900]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  2 10:57:26 np0005542546 python3[7926]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:57:26 np0005542546 python3[7954]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:57:26 np0005542546 python3[7982]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:57:26 np0005542546 python3[8010]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:57:27 np0005542546 python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-76e6-deed-000000001ce1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:57:27 np0005542546 python3[8067]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 10:57:29 np0005542546 systemd[1]: session-3.scope: Deactivated successfully.
Dec  2 10:57:29 np0005542546 systemd[1]: session-3.scope: Consumed 3.881s CPU time.
Dec  2 10:57:29 np0005542546 systemd-logind[790]: Session 3 logged out. Waiting for processes to exit.
Dec  2 10:57:29 np0005542546 systemd-logind[790]: Removed session 3.
Dec  2 10:57:31 np0005542546 systemd-logind[790]: New session 4 of user zuul.
Dec  2 10:57:31 np0005542546 systemd[1]: Started Session 4 of User zuul.
Dec  2 10:57:31 np0005542546 python3[8101]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 10:57:51 np0005542546 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 10:57:51 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 10:58:02 np0005542546 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 10:58:02 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 10:58:13 np0005542546 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 10:58:13 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 10:58:14 np0005542546 setsebool[8170]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  2 10:58:14 np0005542546 setsebool[8170]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  2 10:58:27 np0005542546 kernel: SELinux:  Converting 388 SID table entries...
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 10:58:27 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 10:58:46 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  2 10:58:46 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 10:58:46 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 10:58:46 np0005542546 systemd[1]: Reloading.
Dec  2 10:58:47 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 10:58:47 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 10:58:50 np0005542546 python3[12479]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-3410-b9e3-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 10:58:51 np0005542546 kernel: evm: overlay not supported
Dec  2 10:58:51 np0005542546 systemd[4299]: Starting D-Bus User Message Bus...
Dec  2 10:58:51 np0005542546 dbus-broker-launch[13393]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  2 10:58:51 np0005542546 dbus-broker-launch[13393]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  2 10:58:51 np0005542546 systemd[4299]: Started D-Bus User Message Bus.
Dec  2 10:58:51 np0005542546 dbus-broker-lau[13393]: Ready
Dec  2 10:58:51 np0005542546 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  2 10:58:51 np0005542546 systemd[4299]: Created slice Slice /user.
Dec  2 10:58:51 np0005542546 systemd[4299]: podman-13249.scope: unit configures an IP firewall, but not running as root.
Dec  2 10:58:51 np0005542546 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Dec  2 10:58:51 np0005542546 systemd[4299]: Started podman-13249.scope.
Dec  2 10:58:51 np0005542546 systemd[4299]: Started podman-pause-0d91dd4d.scope.
Dec  2 10:58:53 np0005542546 python3[14301]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.44:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.44:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:58:53 np0005542546 python3[14301]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  2 10:58:53 np0005542546 systemd[1]: session-4.scope: Deactivated successfully.
Dec  2 10:58:53 np0005542546 systemd[1]: session-4.scope: Consumed 1min 12.163s CPU time.
Dec  2 10:58:53 np0005542546 systemd-logind[790]: Session 4 logged out. Waiting for processes to exit.
Dec  2 10:58:53 np0005542546 systemd-logind[790]: Removed session 4.
Dec  2 10:59:17 np0005542546 systemd-logind[790]: New session 5 of user zuul.
Dec  2 10:59:17 np0005542546 systemd[1]: Started Session 5 of User zuul.
Dec  2 10:59:17 np0005542546 python3[24944]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjJ3iKmhe23Jq+pU9zk5dIfAh3+LXL7Ly3NUhZQRlsKOVcz9aFOr31Zee0kNdEY4jCRTplIreiR1MQQAQWJKqc= zuul@np0005542545.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:59:19 np0005542546 python3[25726]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjJ3iKmhe23Jq+pU9zk5dIfAh3+LXL7Ly3NUhZQRlsKOVcz9aFOr31Zee0kNdEY4jCRTplIreiR1MQQAQWJKqc= zuul@np0005542545.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:59:20 np0005542546 python3[26131]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005542546.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  2 10:59:20 np0005542546 python3[26350]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjJ3iKmhe23Jq+pU9zk5dIfAh3+LXL7Ly3NUhZQRlsKOVcz9aFOr31Zee0kNdEY4jCRTplIreiR1MQQAQWJKqc= zuul@np0005542545.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 10:59:21 np0005542546 python3[26655]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 10:59:21 np0005542546 python3[26938]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764691160.8615668-135-20143429892145/source _original_basename=tmp3teii4c4 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 10:59:22 np0005542546 python3[27308]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  2 10:59:22 np0005542546 systemd[1]: Starting Hostname Service...
Dec  2 10:59:22 np0005542546 systemd[1]: Started Hostname Service.
Dec  2 10:59:22 np0005542546 systemd-hostnamed[27443]: Changed pretty hostname to 'compute-0'
Dec  2 10:59:22 np0005542546 systemd-hostnamed[27443]: Hostname set to <compute-0> (static)
Dec  2 10:59:22 np0005542546 NetworkManager[7193]: <info>  [1764691162.4626] hostname: static hostname changed from "np0005542546.novalocal" to "compute-0"
Dec  2 10:59:22 np0005542546 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 10:59:22 np0005542546 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 10:59:23 np0005542546 systemd[1]: session-5.scope: Deactivated successfully.
Dec  2 10:59:23 np0005542546 systemd[1]: session-5.scope: Consumed 2.168s CPU time.
Dec  2 10:59:23 np0005542546 systemd-logind[790]: Session 5 logged out. Waiting for processes to exit.
Dec  2 10:59:23 np0005542546 systemd-logind[790]: Removed session 5.
Dec  2 10:59:28 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 10:59:28 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 10:59:28 np0005542546 systemd[1]: man-db-cache-update.service: Consumed 49.229s CPU time.
Dec  2 10:59:28 np0005542546 systemd[1]: run-rc523e4f076324870b2a1a6dd230e18f6.service: Deactivated successfully.
Dec  2 10:59:32 np0005542546 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 10:59:52 np0005542546 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 11:02:46 np0005542546 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  2 11:02:46 np0005542546 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  2 11:02:46 np0005542546 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  2 11:02:46 np0005542546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  2 11:04:54 np0005542546 systemd-logind[790]: New session 6 of user zuul.
Dec  2 11:04:54 np0005542546 systemd[1]: Started Session 6 of User zuul.
Dec  2 11:04:55 np0005542546 python3[30069]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:04:56 np0005542546 python3[30186]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:04:57 np0005542546 python3[30259]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:04:57 np0005542546 python3[30285]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:04:58 np0005542546 python3[30358]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:04:58 np0005542546 python3[30384]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:04:58 np0005542546 python3[30457]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:04:58 np0005542546 python3[30483]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:04:59 np0005542546 python3[30556]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:04:59 np0005542546 python3[30582]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:04:59 np0005542546 python3[30655]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:05:00 np0005542546 python3[30681]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:05:00 np0005542546 python3[30754]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:05:00 np0005542546 python3[30780]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 11:05:01 np0005542546 python3[30853]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764691496.626337-33647-127098659777708/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:07:47 np0005542546 python3[30946]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:12:46 np0005542546 systemd-logind[790]: Session 6 logged out. Waiting for processes to exit.
Dec  2 11:12:46 np0005542546 systemd[1]: session-6.scope: Deactivated successfully.
Dec  2 11:12:46 np0005542546 systemd[1]: session-6.scope: Consumed 4.894s CPU time.
Dec  2 11:12:46 np0005542546 systemd-logind[790]: Removed session 6.
Dec  2 11:20:39 np0005542546 systemd-logind[790]: New session 7 of user zuul.
Dec  2 11:20:39 np0005542546 systemd[1]: Started Session 7 of User zuul.
Dec  2 11:20:40 np0005542546 python3.9[31214]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:20:42 np0005542546 python3.9[31395]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:20:50 np0005542546 systemd[1]: session-7.scope: Deactivated successfully.
Dec  2 11:20:50 np0005542546 systemd[1]: session-7.scope: Consumed 8.074s CPU time.
Dec  2 11:20:50 np0005542546 systemd-logind[790]: Session 7 logged out. Waiting for processes to exit.
Dec  2 11:20:50 np0005542546 systemd-logind[790]: Removed session 7.
Dec  2 11:21:01 np0005542546 systemd-logind[790]: New session 8 of user zuul.
Dec  2 11:21:01 np0005542546 systemd[1]: Started Session 8 of User zuul.
Dec  2 11:21:02 np0005542546 python3.9[31609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:21:02 np0005542546 systemd[1]: session-8.scope: Deactivated successfully.
Dec  2 11:21:02 np0005542546 systemd-logind[790]: Session 8 logged out. Waiting for processes to exit.
Dec  2 11:21:02 np0005542546 systemd-logind[790]: Removed session 8.
Dec  2 11:21:20 np0005542546 systemd-logind[790]: New session 9 of user zuul.
Dec  2 11:21:20 np0005542546 systemd[1]: Started Session 9 of User zuul.
Dec  2 11:21:20 np0005542546 python3.9[31790]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  2 11:21:22 np0005542546 python3.9[31964]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:21:23 np0005542546 python3.9[32116]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:21:24 np0005542546 python3.9[32269]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:21:24 np0005542546 python3.9[32421]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:21:25 np0005542546 python3.9[32573]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:21:26 np0005542546 python3.9[32696]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692484.9880614-73-259655996002036/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:21:27 np0005542546 python3.9[32848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:21:27 np0005542546 python3.9[33004]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:21:28 np0005542546 python3.9[33156]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:21:29 np0005542546 python3.9[33306]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:21:32 np0005542546 python3.9[33559]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:21:33 np0005542546 python3.9[33709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:21:34 np0005542546 python3.9[33863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:21:35 np0005542546 python3.9[34021]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:21:36 np0005542546 python3.9[34105]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:22:20 np0005542546 systemd[1]: Reloading.
Dec  2 11:22:20 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:22:20 np0005542546 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  2 11:22:21 np0005542546 systemd[1]: Reloading.
Dec  2 11:22:21 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:22:21 np0005542546 systemd[1]: Starting dnf makecache...
Dec  2 11:22:21 np0005542546 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  2 11:22:21 np0005542546 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  2 11:22:21 np0005542546 systemd[1]: Reloading.
Dec  2 11:22:21 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:22:21 np0005542546 dnf[34355]: Failed determining last makecache time.
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-barbican-42b4c41831408a8e323 154 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 165 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-cinder-1c00d6490d88e436f26ef 193 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-python-stevedore-c4acc5639fd2329372142 196 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-python-cloudkitty-tests-tempest-2c80f8 186 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 187 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 201 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-python-designate-tests-tempest-347fdbc 186 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-glance-1fd12c29b339f30fe823e 170 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 175 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-manila-3c01b7181572c95dac462 190 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-python-whitebox-neutron-tests-tempest- 183 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-octavia-ba397f07a7331190208c 197 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-watcher-c014f81a8647287f6dcc 202 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:22:21 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-ansible-config_template-5ccaa22121a7ff 185 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 186 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-python-tempestconf-8515371b7cceebd4282 210 kB/s | 3.0 kB     00:00
Dec  2 11:22:21 np0005542546 dnf[34355]: delorean-openstack-heat-ui-013accbfd179753bc3f0 202 kB/s | 3.0 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: CentOS Stream 9 - BaseOS                         24 kB/s | 5.9 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: CentOS Stream 9 - AppStream                      61 kB/s | 6.0 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: CentOS Stream 9 - CRB                            52 kB/s | 5.8 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: CentOS Stream 9 - Extras packages                27 kB/s | 8.3 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: dlrn-antelope-testing                           167 kB/s | 3.0 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: dlrn-antelope-build-deps                        186 kB/s | 3.0 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: centos9-rabbitmq                                122 kB/s | 3.0 kB     00:00
Dec  2 11:22:22 np0005542546 dnf[34355]: centos9-storage                                  42 kB/s | 3.0 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: centos9-opstools                                135 kB/s | 3.0 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: NFV SIG OpenvSwitch                             133 kB/s | 3.0 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: repo-setup-centos-appstream                     202 kB/s | 4.4 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: repo-setup-centos-baseos                        188 kB/s | 3.9 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: repo-setup-centos-highavailability              165 kB/s | 3.9 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: repo-setup-centos-powertools                    197 kB/s | 4.3 kB     00:00
Dec  2 11:22:23 np0005542546 dnf[34355]: Extra Packages for Enterprise Linux 9 - x86_64  100 kB/s |  32 kB     00:00
Dec  2 11:22:24 np0005542546 dnf[34355]: Metadata cache created.
Dec  2 11:22:24 np0005542546 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  2 11:22:24 np0005542546 systemd[1]: Finished dnf makecache.
Dec  2 11:22:24 np0005542546 systemd[1]: dnf-makecache.service: Consumed 1.811s CPU time.
Dec  2 11:23:30 np0005542546 kernel: SELinux:  Converting 2719 SID table entries...
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 11:23:30 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 11:23:30 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  2 11:23:30 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:23:30 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:23:30 np0005542546 systemd[1]: Reloading.
Dec  2 11:23:30 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:23:30 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:23:31 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:23:31 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:23:31 np0005542546 systemd[1]: man-db-cache-update.service: Consumed 1.225s CPU time.
Dec  2 11:23:31 np0005542546 systemd[1]: run-rd459e6593846480fadc25125cfa1fc69.service: Deactivated successfully.
Dec  2 11:23:32 np0005542546 python3.9[35699]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:23:34 np0005542546 python3.9[35982]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  2 11:23:34 np0005542546 python3.9[36134]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  2 11:23:37 np0005542546 python3.9[36287]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:23:38 np0005542546 python3.9[36439]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  2 11:23:39 np0005542546 python3.9[36591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:23:40 np0005542546 python3.9[36743]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:23:40 np0005542546 python3.9[36866]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692619.755366-236-185024365029629/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:23:41 np0005542546 python3.9[37018]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:23:42 np0005542546 python3.9[37170]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:23:43 np0005542546 python3.9[37323]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:23:44 np0005542546 python3.9[37475]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  2 11:23:44 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:23:44 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:23:44 np0005542546 python3.9[37629]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 11:23:45 np0005542546 python3.9[37787]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 11:23:51 np0005542546 python3.9[37947]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  2 11:23:52 np0005542546 python3.9[38100]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 11:23:52 np0005542546 python3.9[38258]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  2 11:23:53 np0005542546 python3.9[38410]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:23:56 np0005542546 python3.9[38563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:23:56 np0005542546 python3.9[38715]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:23:57 np0005542546 python3.9[38838]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764692636.4133203-355-126811832350407/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:23:58 np0005542546 python3.9[38990]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:23:58 np0005542546 systemd[1]: Starting Load Kernel Modules...
Dec  2 11:23:58 np0005542546 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  2 11:23:58 np0005542546 kernel: Bridge firewalling registered
Dec  2 11:23:58 np0005542546 systemd-modules-load[38994]: Inserted module 'br_netfilter'
Dec  2 11:23:58 np0005542546 systemd[1]: Finished Load Kernel Modules.
Dec  2 11:23:59 np0005542546 python3.9[39149]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:23:59 np0005542546 python3.9[39272]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764692638.9347448-378-178461838853854/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:24:00 np0005542546 python3.9[39424]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:24:04 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:24:04 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:24:05 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:24:05 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:24:05 np0005542546 systemd[1]: Reloading.
Dec  2 11:24:05 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:24:05 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:24:07 np0005542546 python3.9[40940]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:24:07 np0005542546 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  2 11:24:07 np0005542546 irqbalance[785]: IRQ 26 affinity is now unmanaged
Dec  2 11:24:07 np0005542546 python3.9[41899]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  2 11:24:08 np0005542546 python3.9[42581]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:24:09 np0005542546 python3.9[43420]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:09 np0005542546 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 11:24:09 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:24:09 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:24:09 np0005542546 systemd[1]: man-db-cache-update.service: Consumed 5.095s CPU time.
Dec  2 11:24:09 np0005542546 systemd[1]: run-rf83e5fd5ac084e28ae272c788e1be7c6.service: Deactivated successfully.
Dec  2 11:24:09 np0005542546 systemd[1]: Starting Authorization Manager...
Dec  2 11:24:09 np0005542546 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 11:24:09 np0005542546 polkitd[43838]: Started polkitd version 0.117
Dec  2 11:24:09 np0005542546 systemd[1]: Started Authorization Manager.
Dec  2 11:24:10 np0005542546 python3.9[44008]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:24:10 np0005542546 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  2 11:24:10 np0005542546 systemd[1]: tuned.service: Deactivated successfully.
Dec  2 11:24:10 np0005542546 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  2 11:24:10 np0005542546 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 11:24:10 np0005542546 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 11:24:11 np0005542546 python3.9[44170]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  2 11:24:13 np0005542546 python3.9[44322]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:24:13 np0005542546 systemd[1]: Reloading.
Dec  2 11:24:13 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:24:14 np0005542546 python3.9[44511]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:24:14 np0005542546 systemd[1]: Reloading.
Dec  2 11:24:15 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:24:15 np0005542546 python3.9[44700]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:16 np0005542546 python3.9[44853]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:16 np0005542546 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  2 11:24:17 np0005542546 python3.9[45006]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:19 np0005542546 python3.9[45168]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:20 np0005542546 python3.9[45321]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:24:20 np0005542546 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  2 11:24:20 np0005542546 systemd[1]: Stopped Apply Kernel Variables.
Dec  2 11:24:20 np0005542546 systemd[1]: Stopping Apply Kernel Variables...
Dec  2 11:24:20 np0005542546 systemd[1]: Starting Apply Kernel Variables...
Dec  2 11:24:20 np0005542546 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  2 11:24:20 np0005542546 systemd[1]: Finished Apply Kernel Variables.
Dec  2 11:24:20 np0005542546 systemd[1]: session-9.scope: Deactivated successfully.
Dec  2 11:24:20 np0005542546 systemd[1]: session-9.scope: Consumed 2min 21.547s CPU time.
Dec  2 11:24:20 np0005542546 systemd-logind[790]: Session 9 logged out. Waiting for processes to exit.
Dec  2 11:24:20 np0005542546 systemd-logind[790]: Removed session 9.
Dec  2 11:24:26 np0005542546 systemd-logind[790]: New session 10 of user zuul.
Dec  2 11:24:26 np0005542546 systemd[1]: Started Session 10 of User zuul.
Dec  2 11:24:27 np0005542546 python3.9[45504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:24:28 np0005542546 python3.9[45658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:24:29 np0005542546 python3.9[45814]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:30 np0005542546 python3.9[45965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:24:31 np0005542546 python3.9[46121]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:24:32 np0005542546 python3.9[46205]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:24:34 np0005542546 python3.9[46360]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:24:35 np0005542546 python3.9[46531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:24:36 np0005542546 python3.9[46683]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:24:36 np0005542546 systemd[1]: var-lib-containers-storage-overlay-compat2387114718-merged.mount: Deactivated successfully.
Dec  2 11:24:36 np0005542546 podman[46684]: 2025-12-02 16:24:36.553523291 +0000 UTC m=+0.061585315 system refresh
Dec  2 11:24:37 np0005542546 python3.9[46847]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:24:37 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:24:38 np0005542546 python3.9[46970]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692676.8010507-109-240641996588614/.source.json follow=False _original_basename=podman_network_config.j2 checksum=de619081fdf6ffbc558b6bb5ba38269ebcdf6cfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:24:38 np0005542546 python3.9[47122]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:24:39 np0005542546 python3.9[47245]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764692678.389216-124-231033698788056/.source.conf follow=False _original_basename=registries.conf.j2 checksum=ea7e71ddf075bf55e555c64399d15b2ffe005fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:24:40 np0005542546 python3.9[47397]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:24:40 np0005542546 python3.9[47549]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:24:41 np0005542546 python3.9[47701]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:24:42 np0005542546 python3.9[47853]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:24:43 np0005542546 python3.9[48003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:24:43 np0005542546 python3.9[48157]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:24:46 np0005542546 python3.9[48310]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:24:47 np0005542546 irqbalance[785]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  2 11:24:47 np0005542546 irqbalance[785]: IRQ 27 affinity is now unmanaged
Dec  2 11:24:48 np0005542546 python3.9[48470]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:24:51 np0005542546 python3.9[48623]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:24:53 np0005542546 python3.9[48776]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:24:55 np0005542546 python3.9[48932]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:24:59 np0005542546 python3.9[49101]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:25:01 np0005542546 python3.9[49254]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:25:13 np0005542546 python3.9[49592]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:25:15 np0005542546 python3.9[49748]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:25:16 np0005542546 python3.9[49923]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:25:16 np0005542546 python3.9[50046]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764692715.6874835-272-276287882095014/.source.json _original_basename=.muw7c9cp follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:25:19 np0005542546 python3.9[50198]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:25:19 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:22 np0005542546 systemd[1]: var-lib-containers-storage-overlay-compat2285408699-lower\x2dmapped.mount: Deactivated successfully.
Dec  2 11:25:25 np0005542546 podman[50210]: 2025-12-02 16:25:25.746766174 +0000 UTC m=+5.927325375 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 11:25:25 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:25 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:25 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:26 np0005542546 python3.9[50508]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:25:26 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:37 np0005542546 podman[50520]: 2025-12-02 16:25:37.975542236 +0000 UTC m=+11.117650120 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 11:25:37 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:38 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:38 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:39 np0005542546 python3.9[50819]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:25:39 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:41 np0005542546 podman[50832]: 2025-12-02 16:25:41.038145791 +0000 UTC m=+1.635447691 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  2 11:25:41 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:41 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:41 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:42 np0005542546 python3.9[51069]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:25:42 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:55 np0005542546 podman[51082]: 2025-12-02 16:25:55.381068806 +0000 UTC m=+13.296610511 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  2 11:25:55 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:55 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:55 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:25:56 np0005542546 python3.9[51359]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:25:56 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:11 np0005542546 podman[51371]: 2025-12-02 16:26:11.420053172 +0000 UTC m=+14.812853481 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  2 11:26:11 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:11 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:11 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:12 np0005542546 python3.9[51689]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:26:14 np0005542546 podman[51701]: 2025-12-02 16:26:14.755564198 +0000 UTC m=+2.472364582 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  2 11:26:14 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:14 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:14 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:16 np0005542546 python3.9[51977]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:26:16 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:18 np0005542546 podman[51990]: 2025-12-02 16:26:18.944923173 +0000 UTC m=+2.636917002 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  2 11:26:18 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:19 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:19 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:19 np0005542546 python3.9[52248]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  2 11:26:28 np0005542546 podman[52260]: 2025-12-02 16:26:28.366547052 +0000 UTC m=+8.521091695 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  2 11:26:28 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:28 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:28 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:26:29 np0005542546 systemd[1]: session-10.scope: Deactivated successfully.
Dec  2 11:26:29 np0005542546 systemd[1]: session-10.scope: Consumed 2min 33.048s CPU time.
Dec  2 11:26:29 np0005542546 systemd-logind[790]: Session 10 logged out. Waiting for processes to exit.
Dec  2 11:26:29 np0005542546 systemd-logind[790]: Removed session 10.
Dec  2 11:26:35 np0005542546 systemd-logind[790]: New session 11 of user zuul.
Dec  2 11:26:35 np0005542546 systemd[1]: Started Session 11 of User zuul.
Dec  2 11:26:36 np0005542546 python3.9[52658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:26:37 np0005542546 python3.9[52814]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  2 11:26:38 np0005542546 python3.9[52967]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 11:26:39 np0005542546 python3.9[53125]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 11:26:40 np0005542546 python3.9[53285]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:26:41 np0005542546 python3.9[53369]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:26:43 np0005542546 python3.9[53531]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:26:56 np0005542546 kernel: SELinux:  Converting 2732 SID table entries...
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 11:26:56 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 11:26:56 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  2 11:26:56 np0005542546 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  2 11:26:58 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:26:58 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:26:58 np0005542546 systemd[1]: Reloading.
Dec  2 11:26:58 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:26:58 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:26:58 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:26:59 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:26:59 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:26:59 np0005542546 systemd[1]: run-r4ecd53ce31d04dfa9e1e23f8b5dea515.service: Deactivated successfully.
Dec  2 11:27:00 np0005542546 python3.9[54629]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:27:00 np0005542546 systemd[1]: Reloading.
Dec  2 11:27:00 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:27:00 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:27:00 np0005542546 systemd[1]: Starting Open vSwitch Database Unit...
Dec  2 11:27:00 np0005542546 chown[54672]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  2 11:27:00 np0005542546 ovs-ctl[54677]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  2 11:27:00 np0005542546 ovs-ctl[54677]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  2 11:27:00 np0005542546 ovs-ctl[54677]: Starting ovsdb-server [  OK  ]
Dec  2 11:27:00 np0005542546 ovs-vsctl[54726]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  2 11:27:00 np0005542546 ovs-vsctl[54746]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"000c10a1-5e88-4874-8132-a124d4da5271\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  2 11:27:00 np0005542546 ovs-ctl[54677]: Configuring Open vSwitch system IDs [  OK  ]
Dec  2 11:27:00 np0005542546 ovs-vsctl[54752]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  2 11:27:00 np0005542546 ovs-ctl[54677]: Enabling remote OVSDB managers [  OK  ]
Dec  2 11:27:00 np0005542546 systemd[1]: Started Open vSwitch Database Unit.
Dec  2 11:27:00 np0005542546 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  2 11:27:00 np0005542546 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  2 11:27:00 np0005542546 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  2 11:27:01 np0005542546 kernel: openvswitch: Open vSwitch switching datapath
Dec  2 11:27:01 np0005542546 ovs-ctl[54796]: Inserting openvswitch module [  OK  ]
Dec  2 11:27:01 np0005542546 ovs-ctl[54765]: Starting ovs-vswitchd [  OK  ]
Dec  2 11:27:01 np0005542546 ovs-vsctl[54813]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  2 11:27:01 np0005542546 ovs-ctl[54765]: Enabling remote OVSDB managers [  OK  ]
Dec  2 11:27:01 np0005542546 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  2 11:27:01 np0005542546 systemd[1]: Starting Open vSwitch...
Dec  2 11:27:01 np0005542546 systemd[1]: Finished Open vSwitch.
Dec  2 11:27:02 np0005542546 python3.9[54965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:27:03 np0005542546 python3.9[55117]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  2 11:27:04 np0005542546 kernel: SELinux:  Converting 2746 SID table entries...
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 11:27:04 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 11:27:05 np0005542546 python3.9[55272]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:27:06 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  2 11:27:06 np0005542546 python3.9[55430]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:27:08 np0005542546 python3.9[55583]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:27:09 np0005542546 python3.9[55870]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 11:27:10 np0005542546 python3.9[56020]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:27:11 np0005542546 python3.9[56174]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:27:13 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:27:13 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:27:13 np0005542546 systemd[1]: Reloading.
Dec  2 11:27:13 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:27:13 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:27:13 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:27:13 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:27:13 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:27:13 np0005542546 systemd[1]: run-r95d3abcce93d40b1b2e593ed9fedf9cb.service: Deactivated successfully.
Dec  2 11:27:14 np0005542546 python3.9[56493]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:27:14 np0005542546 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  2 11:27:14 np0005542546 systemd[1]: Stopped Network Manager Wait Online.
Dec  2 11:27:14 np0005542546 systemd[1]: Stopping Network Manager Wait Online...
Dec  2 11:27:14 np0005542546 systemd[1]: Stopping Network Manager...
Dec  2 11:27:14 np0005542546 NetworkManager[7193]: <info>  [1764692834.7963] caught SIGTERM, shutting down normally.
Dec  2 11:27:14 np0005542546 NetworkManager[7193]: <info>  [1764692834.7976] dhcp4 (eth0): canceled DHCP transaction
Dec  2 11:27:14 np0005542546 NetworkManager[7193]: <info>  [1764692834.7977] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 11:27:14 np0005542546 NetworkManager[7193]: <info>  [1764692834.7977] dhcp4 (eth0): state changed no lease
Dec  2 11:27:14 np0005542546 NetworkManager[7193]: <info>  [1764692834.7979] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 11:27:14 np0005542546 NetworkManager[7193]: <info>  [1764692834.8057] exiting (success)
Dec  2 11:27:14 np0005542546 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 11:27:14 np0005542546 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 11:27:14 np0005542546 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  2 11:27:14 np0005542546 systemd[1]: Stopped Network Manager.
Dec  2 11:27:14 np0005542546 systemd[1]: NetworkManager.service: Consumed 15.994s CPU time, 4.1M memory peak, read 0B from disk, written 30.5K to disk.
Dec  2 11:27:14 np0005542546 systemd[1]: Starting Network Manager...
Dec  2 11:27:14 np0005542546 NetworkManager[56503]: <info>  [1764692834.8802] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4bc2fc38-bd6d-4040-a7f5-cb188f94ca47)
Dec  2 11:27:14 np0005542546 NetworkManager[56503]: <info>  [1764692834.8805] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 11:27:14 np0005542546 NetworkManager[56503]: <info>  [1764692834.8871] manager[0x561e05b66090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 11:27:14 np0005542546 systemd[1]: Starting Hostname Service...
Dec  2 11:27:15 np0005542546 systemd[1]: Started Hostname Service.
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0076] hostname: hostname: using hostnamed
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0077] hostname: static hostname changed from (none) to "compute-0"
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0083] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0088] manager[0x561e05b66090]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0088] manager[0x561e05b66090]: rfkill: WWAN hardware radio set enabled
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0114] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0126] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0127] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0130] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0131] manager: Networking is enabled by state file
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0135] settings: Loaded settings plugin: keyfile (internal)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0140] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0170] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0182] dhcp: init: Using DHCP client 'internal'
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0185] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0190] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0196] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0207] device (lo): Activation: starting connection 'lo' (3326fae0-34b6-4b68-8a72-7a7ca30af2b3)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0216] device (eth0): carrier: link connected
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0221] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0226] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0226] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0233] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0240] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0247] device (eth1): carrier: link connected
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0251] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0257] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b) (indicated)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0258] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0266] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0276] device (eth1): Activation: starting connection 'ci-private-network' (0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b)
Dec  2 11:27:15 np0005542546 systemd[1]: Started Network Manager.
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0284] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0304] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0309] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0312] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0317] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0324] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0327] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0331] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0336] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0346] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0350] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0361] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0379] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0389] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0392] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0399] device (lo): Activation: successful, device activated.
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0407] dhcp4 (eth0): state changed new lease, address=38.102.83.151
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0416] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 11:27:15 np0005542546 systemd[1]: Starting Network Manager Wait Online...
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0486] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0492] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0500] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0504] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0508] device (eth1): Activation: successful, device activated.
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0518] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0520] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0524] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0527] device (eth0): Activation: successful, device activated.
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0534] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 11:27:15 np0005542546 NetworkManager[56503]: <info>  [1764692835.0565] manager: startup complete
Dec  2 11:27:15 np0005542546 systemd[1]: Finished Network Manager Wait Online.
Dec  2 11:27:15 np0005542546 python3.9[56719]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:27:21 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:27:21 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:27:21 np0005542546 systemd[1]: Reloading.
Dec  2 11:27:21 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:27:21 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:27:21 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:27:22 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:27:22 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:27:22 np0005542546 systemd[1]: run-rb37f957a7cb04a5db83db12c77e5b87c.service: Deactivated successfully.
Dec  2 11:27:22 np0005542546 python3.9[57177]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:27:23 np0005542546 python3.9[57329]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:24 np0005542546 python3.9[57483]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:25 np0005542546 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 11:27:25 np0005542546 python3.9[57635]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:26 np0005542546 python3.9[57787]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:26 np0005542546 python3.9[57939]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:27 np0005542546 python3.9[58091]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:27:27 np0005542546 python3.9[58214]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692846.810287-229-223109548783681/.source _original_basename=.v5dly4ip follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:28 np0005542546 python3.9[58366]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:29 np0005542546 python3.9[58518]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  2 11:27:30 np0005542546 python3.9[58670]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:32 np0005542546 python3.9[59097]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  2 11:27:33 np0005542546 ansible-async_wrapper.py[59272]: Invoked with j662208166567 300 /home/zuul/.ansible/tmp/ansible-tmp-1764692852.8716345-295-257804374365832/AnsiballZ_edpm_os_net_config.py _
Dec  2 11:27:33 np0005542546 ansible-async_wrapper.py[59275]: Starting module and watcher
Dec  2 11:27:33 np0005542546 ansible-async_wrapper.py[59275]: Start watching 59276 (300)
Dec  2 11:27:33 np0005542546 ansible-async_wrapper.py[59276]: Start module (59276)
Dec  2 11:27:33 np0005542546 ansible-async_wrapper.py[59272]: Return async_wrapper task started.
Dec  2 11:27:34 np0005542546 python3.9[59277]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  2 11:27:34 np0005542546 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  2 11:27:34 np0005542546 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  2 11:27:34 np0005542546 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  2 11:27:34 np0005542546 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  2 11:27:34 np0005542546 kernel: cfg80211: failed to load regulatory.db
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.8727] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.8744] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9342] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9344] audit: op="connection-add" uuid="16704b4e-5d01-4a9b-8365-00e0e0006d99" name="br-ex-br" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9367] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9368] audit: op="connection-add" uuid="68c3894e-1949-42cf-96f9-9beb3b14ae7b" name="br-ex-port" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9382] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9383] audit: op="connection-add" uuid="6af0307d-5210-4b8a-84ef-0a08b7c3cfd7" name="eth1-port" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9397] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9399] audit: op="connection-add" uuid="6bf576c6-b1c1-4322-aaa6-8f6466540bae" name="vlan20-port" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9415] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9416] audit: op="connection-add" uuid="f832fce7-eb74-4e69-aa29-9ac403bd4d5f" name="vlan21-port" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9432] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9435] audit: op="connection-add" uuid="1ca2ba6e-d1f1-4496-9a22-3b4baa0c5ee2" name="vlan22-port" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9458] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9478] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9479] audit: op="connection-add" uuid="6ca317a7-61ba-4594-963a-e59b4e63c6fb" name="br-ex-if" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9620] audit: op="connection-update" uuid="0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b" name="ci-private-network" args="connection.controller,connection.slave-type,connection.port-type,connection.master,connection.timestamp,ovs-interface.type,ipv4.routing-rules,ipv4.never-default,ipv4.method,ipv4.dns,ipv4.addresses,ipv4.routes,ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.method,ipv6.dns,ipv6.addresses,ipv6.routes" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9644] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9646] audit: op="connection-add" uuid="a38247b5-61ba-4e30-87dd-6209ce23c78b" name="vlan20-if" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9661] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9663] audit: op="connection-add" uuid="3c68fc95-09dd-4b71-912f-b50452de959f" name="vlan21-if" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9682] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9684] audit: op="connection-add" uuid="69090667-d0d2-4e85-bca8-b9dbb91d2b35" name="vlan22-if" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9696] audit: op="connection-delete" uuid="0cca5ee4-d085-35f2-bb0e-3e3a7d58eff1" name="Wired connection 1" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9710] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9721] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9726] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (16704b4e-5d01-4a9b-8365-00e0e0006d99)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9726] audit: op="connection-activate" uuid="16704b4e-5d01-4a9b-8365-00e0e0006d99" name="br-ex-br" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9728] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9735] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9740] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (68c3894e-1949-42cf-96f9-9beb3b14ae7b)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9742] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9748] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9753] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (6af0307d-5210-4b8a-84ef-0a08b7c3cfd7)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9755] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9762] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9766] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (6bf576c6-b1c1-4322-aaa6-8f6466540bae)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9768] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9775] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9779] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f832fce7-eb74-4e69-aa29-9ac403bd4d5f)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9782] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9788] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9791] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (1ca2ba6e-d1f1-4496-9a22-3b4baa0c5ee2)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9792] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9795] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9796] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9802] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9809] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9814] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (6ca317a7-61ba-4594-963a-e59b4e63c6fb)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9816] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9819] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9822] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9824] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9825] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9836] device (eth1): disconnecting for new activation request.
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9836] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9838] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9840] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9841] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9843] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9847] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9851] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (a38247b5-61ba-4e30-87dd-6209ce23c78b)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9852] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9855] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9857] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9859] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9862] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9866] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9871] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (3c68fc95-09dd-4b71-912f-b50452de959f)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9872] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9875] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9876] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9877] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9879] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9884] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9888] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (69090667-d0d2-4e85-bca8-b9dbb91d2b35)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9888] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9891] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9892] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9893] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9894] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9907] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method" pid=59278 uid=0 result="success"
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9909] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9913] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9915] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9922] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9925] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9929] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9931] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9933] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9937] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9941] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 kernel: ovs-system: entered promiscuous mode
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9944] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9946] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9951] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9955] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9957] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9958] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9962] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9967] dhcp4 (eth0): canceled DHCP transaction
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9967] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9967] dhcp4 (eth0): state changed no lease
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9969] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  2 11:27:35 np0005542546 kernel: Timeout policy base is empty
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9981] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  2 11:27:35 np0005542546 NetworkManager[56503]: <info>  [1764692855.9984] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59278 uid=0 result="fail" reason="Device is not activated"
Dec  2 11:27:35 np0005542546 systemd-udevd[59283]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 11:27:36 np0005542546 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0022] dhcp4 (eth0): state changed new lease, address=38.102.83.151
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0068] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0079] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0086] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0090] device (eth1): disconnecting for new activation request.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0091] audit: op="connection-activate" uuid="0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b" name="ci-private-network" pid=59278 uid=0 result="success"
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0123] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59278 uid=0 result="success"
Dec  2 11:27:36 np0005542546 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0142] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0272] device (eth1): Activation: starting connection 'ci-private-network' (0442d5be-8c08-5ab4-bf4a-7d8b6e04d93b)
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0282] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0285] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0292] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0294] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0295] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0296] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0296] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0298] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0339] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0352] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0356] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0360] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0364] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0368] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0371] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0374] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0379] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0383] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0388] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0391] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0395] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 kernel: br-ex: entered promiscuous mode
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0405] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0410] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0431] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0435] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0440] device (eth1): Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0511] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0523] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 kernel: vlan22: entered promiscuous mode
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0546] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0548] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0552] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0662] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  2 11:27:36 np0005542546 kernel: vlan20: entered promiscuous mode
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0681] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0765] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0773] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0782] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0784] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0789] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 kernel: vlan21: entered promiscuous mode
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0805] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0806] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0811] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0875] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0885] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0901] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0902] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 11:27:36 np0005542546 NetworkManager[56503]: <info>  [1764692856.0909] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.1912] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59278 uid=0 result="success"
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.3379] checkpoint[0x561e05b3b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.3382] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59278 uid=0 result="success"
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.6269] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59278 uid=0 result="success"
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.6282] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59278 uid=0 result="success"
Dec  2 11:27:37 np0005542546 python3.9[59611]: ansible-ansible.legacy.async_status Invoked with jid=j662208166567.59272 mode=status _async_dir=/root/.ansible_async
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.8361] audit: op="networking-control" arg="global-dns-configuration" pid=59278 uid=0 result="success"
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.8395] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.8432] audit: op="networking-control" arg="global-dns-configuration" pid=59278 uid=0 result="success"
Dec  2 11:27:37 np0005542546 NetworkManager[56503]: <info>  [1764692857.8455] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59278 uid=0 result="success"
Dec  2 11:27:38 np0005542546 NetworkManager[56503]: <info>  [1764692858.0130] checkpoint[0x561e05b3ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  2 11:27:38 np0005542546 NetworkManager[56503]: <info>  [1764692858.0137] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59278 uid=0 result="success"
Dec  2 11:27:38 np0005542546 ansible-async_wrapper.py[59276]: Module complete (59276)
Dec  2 11:27:38 np0005542546 ansible-async_wrapper.py[59275]: Done in kid B.
Dec  2 11:27:41 np0005542546 python3.9[59716]: ansible-ansible.legacy.async_status Invoked with jid=j662208166567.59272 mode=status _async_dir=/root/.ansible_async
Dec  2 11:27:41 np0005542546 python3.9[59816]: ansible-ansible.legacy.async_status Invoked with jid=j662208166567.59272 mode=cleanup _async_dir=/root/.ansible_async
Dec  2 11:27:42 np0005542546 python3.9[59968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:27:42 np0005542546 python3.9[60091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692861.8475137-322-142300699526295/.source.returncode _original_basename=.u0ntm8hz follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:43 np0005542546 python3.9[60243]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:27:44 np0005542546 python3.9[60366]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692863.2222874-338-55366948531139/.source.cfg _original_basename=._qtct9uc follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:27:45 np0005542546 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 11:27:45 np0005542546 python3.9[60519]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:27:45 np0005542546 systemd[1]: Reloading Network Manager...
Dec  2 11:27:45 np0005542546 NetworkManager[56503]: <info>  [1764692865.2359] audit: op="reload" arg="0" pid=60525 uid=0 result="success"
Dec  2 11:27:45 np0005542546 NetworkManager[56503]: <info>  [1764692865.2370] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  2 11:27:45 np0005542546 systemd[1]: Reloaded Network Manager.
Dec  2 11:27:45 np0005542546 systemd[1]: session-11.scope: Deactivated successfully.
Dec  2 11:27:45 np0005542546 systemd[1]: session-11.scope: Consumed 50.996s CPU time.
Dec  2 11:27:45 np0005542546 systemd-logind[790]: Session 11 logged out. Waiting for processes to exit.
Dec  2 11:27:45 np0005542546 systemd-logind[790]: Removed session 11.
Dec  2 11:27:51 np0005542546 systemd-logind[790]: New session 12 of user zuul.
Dec  2 11:27:51 np0005542546 systemd[1]: Started Session 12 of User zuul.
Dec  2 11:27:52 np0005542546 python3.9[60709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:27:53 np0005542546 python3.9[60863]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:27:54 np0005542546 python3.9[61053]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:27:54 np0005542546 systemd[1]: session-12.scope: Deactivated successfully.
Dec  2 11:27:54 np0005542546 systemd[1]: session-12.scope: Consumed 2.504s CPU time.
Dec  2 11:27:54 np0005542546 systemd-logind[790]: Session 12 logged out. Waiting for processes to exit.
Dec  2 11:27:54 np0005542546 systemd-logind[790]: Removed session 12.
Dec  2 11:27:55 np0005542546 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 11:28:00 np0005542546 systemd-logind[790]: New session 13 of user zuul.
Dec  2 11:28:00 np0005542546 systemd[1]: Started Session 13 of User zuul.
Dec  2 11:28:01 np0005542546 python3.9[61235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:28:02 np0005542546 python3.9[61389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:28:03 np0005542546 python3.9[61545]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:28:03 np0005542546 python3.9[61630]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:28:05 np0005542546 python3.9[61783]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:28:06 np0005542546 python3.9[61975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:07 np0005542546 python3.9[62127]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:28:07 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:28:08 np0005542546 python3.9[62292]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:09 np0005542546 python3.9[62370]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:09 np0005542546 python3.9[62522]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:10 np0005542546 python3.9[62600]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:11 np0005542546 python3.9[62752]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:11 np0005542546 python3.9[62904]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:12 np0005542546 python3.9[63056]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:12 np0005542546 python3.9[63208]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:13 np0005542546 python3.9[63360]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:28:15 np0005542546 python3.9[63513]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:28:16 np0005542546 python3.9[63667]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:28:17 np0005542546 python3.9[63819]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:28:17 np0005542546 python3.9[63971]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:28:18 np0005542546 python3.9[64124]: ansible-service_facts Invoked
Dec  2 11:28:18 np0005542546 network[64141]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:28:18 np0005542546 network[64142]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:28:18 np0005542546 network[64143]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:28:24 np0005542546 python3.9[64595]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:28:27 np0005542546 python3.9[64750]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  2 11:28:28 np0005542546 python3.9[64902]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:29 np0005542546 python3.9[65027]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692908.0006747-232-236392985840918/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:30 np0005542546 python3.9[65181]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:30 np0005542546 python3.9[65306]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692909.6024818-247-172604018565362/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:31 np0005542546 python3.9[65460]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:32 np0005542546 python3.9[65614]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:28:34 np0005542546 python3.9[65698]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:28:35 np0005542546 python3.9[65852]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:28:35 np0005542546 python3.9[65936]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:28:35 np0005542546 chronyd[793]: chronyd exiting
Dec  2 11:28:35 np0005542546 systemd[1]: Stopping NTP client/server...
Dec  2 11:28:36 np0005542546 systemd[1]: chronyd.service: Deactivated successfully.
Dec  2 11:28:36 np0005542546 systemd[1]: Stopped NTP client/server.
Dec  2 11:28:36 np0005542546 systemd[1]: Starting NTP client/server...
Dec  2 11:28:36 np0005542546 chronyd[65945]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  2 11:28:36 np0005542546 chronyd[65945]: Frequency -26.708 +/- 0.337 ppm read from /var/lib/chrony/drift
Dec  2 11:28:36 np0005542546 chronyd[65945]: Loaded seccomp filter (level 2)
Dec  2 11:28:36 np0005542546 systemd[1]: Started NTP client/server.
Dec  2 11:28:36 np0005542546 systemd[1]: session-13.scope: Deactivated successfully.
Dec  2 11:28:36 np0005542546 systemd[1]: session-13.scope: Consumed 25.923s CPU time.
Dec  2 11:28:36 np0005542546 systemd-logind[790]: Session 13 logged out. Waiting for processes to exit.
Dec  2 11:28:36 np0005542546 systemd-logind[790]: Removed session 13.
Dec  2 11:28:43 np0005542546 systemd-logind[790]: New session 14 of user zuul.
Dec  2 11:28:43 np0005542546 systemd[1]: Started Session 14 of User zuul.
Dec  2 11:28:44 np0005542546 python3.9[66125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:28:45 np0005542546 python3.9[66281]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:46 np0005542546 python3.9[66456]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:46 np0005542546 python3.9[66534]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.hc2z0_ed recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:47 np0005542546 python3.9[66686]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:48 np0005542546 python3.9[66809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692926.9733386-61-216642375694872/.source _original_basename=._5z9qel8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:48 np0005542546 python3.9[66961]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:49 np0005542546 python3.9[67113]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:50 np0005542546 python3.9[67236]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764692928.9573631-85-250000454176469/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:50 np0005542546 python3.9[67388]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:51 np0005542546 python3.9[67511]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764692930.1984088-85-28600409716805/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:28:52 np0005542546 python3.9[67663]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:52 np0005542546 python3.9[67815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:53 np0005542546 python3.9[67938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692932.3340921-122-77850605543342/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:53 np0005542546 python3.9[68090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:54 np0005542546 python3.9[68213]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692933.5097482-137-7357611481009/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:55 np0005542546 python3.9[68365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:28:55 np0005542546 systemd[1]: Reloading.
Dec  2 11:28:55 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:28:55 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:28:55 np0005542546 systemd[1]: Reloading.
Dec  2 11:28:55 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:28:55 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:28:56 np0005542546 systemd[1]: Starting EDPM Container Shutdown...
Dec  2 11:28:56 np0005542546 systemd[1]: Finished EDPM Container Shutdown.
Dec  2 11:28:56 np0005542546 python3.9[68592]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:57 np0005542546 python3.9[68715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692936.30099-160-93634310628820/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:58 np0005542546 python3.9[68867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:28:58 np0005542546 python3.9[68990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692937.6170702-175-280557204655822/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:28:59 np0005542546 python3.9[69142]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:28:59 np0005542546 systemd[1]: Reloading.
Dec  2 11:28:59 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:28:59 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:28:59 np0005542546 systemd[1]: Reloading.
Dec  2 11:28:59 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:28:59 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:29:00 np0005542546 systemd[1]: Starting Create netns directory...
Dec  2 11:29:00 np0005542546 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 11:29:00 np0005542546 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 11:29:00 np0005542546 systemd[1]: Finished Create netns directory.
Dec  2 11:29:00 np0005542546 python3.9[69369]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:29:00 np0005542546 network[69386]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:29:00 np0005542546 network[69387]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:29:00 np0005542546 network[69388]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:29:04 np0005542546 python3.9[69650]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:29:04 np0005542546 systemd[1]: Reloading.
Dec  2 11:29:04 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:29:04 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:29:04 np0005542546 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  2 11:29:04 np0005542546 iptables.init[69690]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  2 11:29:05 np0005542546 iptables.init[69690]: iptables: Flushing firewall rules: [  OK  ]
Dec  2 11:29:05 np0005542546 systemd[1]: iptables.service: Deactivated successfully.
Dec  2 11:29:05 np0005542546 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  2 11:29:05 np0005542546 python3.9[69886]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:29:06 np0005542546 python3.9[70040]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:29:06 np0005542546 systemd[1]: Reloading.
Dec  2 11:29:06 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:29:06 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:29:06 np0005542546 systemd[1]: Starting Netfilter Tables...
Dec  2 11:29:06 np0005542546 systemd[1]: Finished Netfilter Tables.
Dec  2 11:29:07 np0005542546 python3.9[70232]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:08 np0005542546 python3.9[70385]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:09 np0005542546 python3.9[70510]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692948.133136-244-148032442116564/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:09 np0005542546 python3.9[70663]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:29:09 np0005542546 systemd[1]: Reloading OpenSSH server daemon...
Dec  2 11:29:09 np0005542546 systemd[1]: Reloaded OpenSSH server daemon.
Dec  2 11:29:10 np0005542546 python3.9[70819]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:11 np0005542546 python3.9[70971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:11 np0005542546 python3.9[71094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692950.798535-275-215290624010347/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:12 np0005542546 python3.9[71246]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  2 11:29:12 np0005542546 systemd[1]: Starting Time & Date Service...
Dec  2 11:29:12 np0005542546 systemd[1]: Started Time & Date Service.
Dec  2 11:29:13 np0005542546 python3.9[71402]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:14 np0005542546 python3.9[71554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:14 np0005542546 python3.9[71677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692953.646746-310-56639239860419/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:15 np0005542546 python3.9[71829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:15 np0005542546 python3.9[71952]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764692954.9107976-325-8245267278661/.source.yaml _original_basename=.vutpzfjx follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:16 np0005542546 python3.9[72104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:17 np0005542546 python3.9[72227]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692956.0556502-340-148749761088687/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:18 np0005542546 python3.9[72379]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:18 np0005542546 python3.9[72532]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:19 np0005542546 python3[72685]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 11:29:20 np0005542546 python3.9[72837]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:20 np0005542546 python3.9[72960]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692959.7590523-379-8217237594086/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:21 np0005542546 python3.9[73112]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:22 np0005542546 python3.9[73235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692961.1497657-394-53170824027950/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:22 np0005542546 python3.9[73387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:23 np0005542546 python3.9[73510]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692962.4864855-409-25197228127954/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:24 np0005542546 python3.9[73662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:24 np0005542546 python3.9[73785]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692963.6475136-424-96621315173968/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:25 np0005542546 python3.9[73937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:29:25 np0005542546 python3.9[74060]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764692964.914564-439-208980979474723/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:26 np0005542546 python3.9[74212]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:27 np0005542546 python3.9[74364]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:28 np0005542546 python3.9[74523]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:28 np0005542546 python3.9[74676]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:29 np0005542546 python3.9[74828]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:30 np0005542546 python3.9[74980]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 11:29:30 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:29:30 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:29:31 np0005542546 python3.9[75134]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 11:29:31 np0005542546 systemd[1]: session-14.scope: Deactivated successfully.
Dec  2 11:29:31 np0005542546 systemd[1]: session-14.scope: Consumed 35.392s CPU time.
Dec  2 11:29:31 np0005542546 systemd-logind[790]: Session 14 logged out. Waiting for processes to exit.
Dec  2 11:29:31 np0005542546 systemd-logind[790]: Removed session 14.
Dec  2 11:29:37 np0005542546 systemd-logind[790]: New session 15 of user zuul.
Dec  2 11:29:37 np0005542546 systemd[1]: Started Session 15 of User zuul.
Dec  2 11:29:38 np0005542546 python3.9[75315]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  2 11:29:39 np0005542546 python3.9[75467]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:29:40 np0005542546 python3.9[75619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:29:41 np0005542546 python3.9[75771]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZfp15dSJ7KVbMx1Od/2w/Jsn1ki/WXmQUo7NFC2a/h3jUJWYp/IwT6Vok75nvUvXvjoVGuCUvE6N3ld7C/DeNPRpV+IQzNzCVfUyRZ+CgPlln/I4L+25LvmHpiW/dP4pXvg9QCYHhLfBDS6PQ9gYL2/TEXI1Ub2/w3a7+959hPH7y4yf9f66KNtdYO295aJMv+ls7Mol7gjTeFAHKslJr4RGVkXDsLXD9dxa+YI5iFDpt1Nda6VISGeTZjdd8fqw5qA6LGsmP1pLlUytPaTi9bMQCnh9q8+vI8+5y9f2KR5n5BAn5Znp4sZtlkmIk9qYbu0Iu6tZ7UfH4GliNvLUPzw35B0Q9RqwI/9TFkEG5MWPAHqfPRF7Q+yF87IXgYygPvuinXx/H8t1LjEa+8eKLevt8paywYtvsLz1nxiru/qhbhzUniJOG8e1mA9YbqNffLYC3C8YOkN/DZRgixiyJluW991aTrOlkOqTBOtNDYidz0MEZn6y3i/F4OSoI/hs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPTNVbpRTvHjXr/V28eqX6asBoZp85bNqPnv+7dP+1vs#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWA0fxoKtyCioQ1Qw+XRCmtRF8KTDhsEET+Dk66jzvTLPJnGCgApix9DQNXXtW1swzvX83q9BoxUXSSrYFyQ+Y=#012 create=True mode=0644 path=/tmp/ansible.t1gkq5v3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:42 np0005542546 python3.9[75923]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.t1gkq5v3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:42 np0005542546 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 11:29:43 np0005542546 python3.9[76079]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.t1gkq5v3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:43 np0005542546 systemd[1]: session-15.scope: Deactivated successfully.
Dec  2 11:29:43 np0005542546 systemd[1]: session-15.scope: Consumed 3.731s CPU time.
Dec  2 11:29:43 np0005542546 systemd-logind[790]: Session 15 logged out. Waiting for processes to exit.
Dec  2 11:29:43 np0005542546 systemd-logind[790]: Removed session 15.
Dec  2 11:29:48 np0005542546 systemd-logind[790]: New session 16 of user zuul.
Dec  2 11:29:48 np0005542546 systemd[1]: Started Session 16 of User zuul.
Dec  2 11:29:49 np0005542546 python3.9[76260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:29:50 np0005542546 python3.9[76416]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 11:29:51 np0005542546 python3.9[76570]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:29:52 np0005542546 python3.9[76724]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:53 np0005542546 python3.9[76877]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:29:54 np0005542546 python3.9[77031]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:29:55 np0005542546 python3.9[77186]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:29:55 np0005542546 systemd-logind[790]: Session 16 logged out. Waiting for processes to exit.
Dec  2 11:29:55 np0005542546 systemd[1]: session-16.scope: Deactivated successfully.
Dec  2 11:29:55 np0005542546 systemd[1]: session-16.scope: Consumed 4.725s CPU time.
Dec  2 11:29:55 np0005542546 systemd-logind[790]: Removed session 16.
Dec  2 11:30:00 np0005542546 systemd-logind[790]: New session 17 of user zuul.
Dec  2 11:30:00 np0005542546 systemd[1]: Started Session 17 of User zuul.
Dec  2 11:30:01 np0005542546 python3.9[77364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:30:02 np0005542546 python3.9[77520]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:30:03 np0005542546 python3.9[77604]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 11:30:06 np0005542546 python3.9[77755]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:30:07 np0005542546 python3.9[77906]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:30:08 np0005542546 python3.9[78056]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:30:08 np0005542546 python3.9[78206]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:30:09 np0005542546 systemd[1]: session-17.scope: Deactivated successfully.
Dec  2 11:30:09 np0005542546 systemd[1]: session-17.scope: Consumed 6.057s CPU time.
Dec  2 11:30:09 np0005542546 systemd-logind[790]: Session 17 logged out. Waiting for processes to exit.
Dec  2 11:30:09 np0005542546 systemd-logind[790]: Removed session 17.
Dec  2 11:30:14 np0005542546 systemd-logind[790]: New session 18 of user zuul.
Dec  2 11:30:14 np0005542546 systemd[1]: Started Session 18 of User zuul.
Dec  2 11:30:15 np0005542546 python3.9[78384]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:30:17 np0005542546 python3.9[78540]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:18 np0005542546 python3.9[78692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:19 np0005542546 python3.9[78844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:20 np0005542546 python3.9[78967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693018.667257-65-44556390012178/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7b3e2e7ebc80a918a69bfbde4c16db3dd99cbcc2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:20 np0005542546 python3.9[79119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:21 np0005542546 python3.9[79242]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693020.4001079-65-250538670879468/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a0760a692414c097e402e3fe9f0e8b54455b1d04 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:22 np0005542546 python3.9[79394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:22 np0005542546 python3.9[79517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693021.66764-65-210890238360875/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=84211bb662723bf49761d348c8268a31a1878fa1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:23 np0005542546 python3.9[79669]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:24 np0005542546 python3.9[79821]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:25 np0005542546 python3.9[79973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:25 np0005542546 python3.9[80096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693024.5645304-124-86792658720449/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=22ec5f38604bfd8df76bdd3b867680b9cf49de08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:26 np0005542546 python3.9[80248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:26 np0005542546 python3.9[80371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693025.7823274-124-168710074786622/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a0760a692414c097e402e3fe9f0e8b54455b1d04 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:27 np0005542546 python3.9[80523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:28 np0005542546 python3.9[80646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693027.204559-124-139630609038573/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f8472dd8a3164f7e4d02eee009a68c2db9ec600a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:28 np0005542546 python3.9[80798]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:29 np0005542546 python3.9[80950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:30 np0005542546 python3.9[81102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:30 np0005542546 python3.9[81225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693029.8249638-183-78823544847574/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2fd1d02bcb0e1028149cb3e3ee9ccae7708662a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:31 np0005542546 python3.9[81377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:32 np0005542546 python3.9[81500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693031.1555285-183-273705483761802/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c3135e7392e1b19921f819ae17f4a849dd96ef4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:32 np0005542546 python3.9[81652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:33 np0005542546 python3.9[81775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693032.3344138-183-34709137391799/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=94ca4cc5a76e9a0888726f40185183413093baa3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:34 np0005542546 python3.9[81927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:34 np0005542546 python3.9[82079]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:35 np0005542546 python3.9[82231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:36 np0005542546 python3.9[82354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693035.1205509-242-267194112735239/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9220cbcb175ce200ce5c421b2806062673291926 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:36 np0005542546 python3.9[82506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:37 np0005542546 python3.9[82629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693036.3743112-242-247097949492673/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=63fa26e88c03936f5f85931611e38bd847acbbae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:38 np0005542546 python3.9[82781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:38 np0005542546 python3.9[82904]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693037.5876758-242-20079084596156/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=50becf0b137f1643c51a7500058b003b6ba07641 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:39 np0005542546 python3.9[83056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:40 np0005542546 python3.9[83208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:40 np0005542546 python3.9[83360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:41 np0005542546 python3.9[83483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693040.1932378-301-106620477933064/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c5657c04a44cebae5c9c7f04f9207db08284e8bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:41 np0005542546 python3.9[83635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:42 np0005542546 python3.9[83758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693041.4552834-301-273274987748487/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c3135e7392e1b19921f819ae17f4a849dd96ef4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:43 np0005542546 python3.9[83910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:43 np0005542546 python3.9[84033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693042.6176884-301-131978146556805/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d800dc6935a924b77f9d19a01e3883cfad8dc7d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:44 np0005542546 chronyd[65945]: Selected source 54.39.23.64 (pool.ntp.org)
Dec  2 11:30:44 np0005542546 python3.9[84185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:45 np0005542546 python3.9[84337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:46 np0005542546 python3.9[84460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693045.1344779-369-173962034663605/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:46 np0005542546 python3.9[84612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:47 np0005542546 python3.9[84764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:47 np0005542546 python3.9[84887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693046.9503436-393-156512684028620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:48 np0005542546 python3.9[85039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:49 np0005542546 python3.9[85191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:50 np0005542546 python3.9[85314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693048.8530562-417-51834901904300/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:50 np0005542546 python3.9[85466]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:51 np0005542546 python3.9[85618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:52 np0005542546 python3.9[85741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693050.929437-441-124945657189895/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:52 np0005542546 python3.9[85893]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:53 np0005542546 python3.9[86045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:53 np0005542546 python3.9[86168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693052.8021805-465-177614792762048/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:54 np0005542546 python3.9[86320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:55 np0005542546 python3.9[86472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:55 np0005542546 python3.9[86595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693054.7897828-489-130066639723350/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:56 np0005542546 python3.9[86747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:57 np0005542546 python3.9[86899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:57 np0005542546 python3.9[87022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693056.655561-513-255528089426696/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:30:58 np0005542546 python3.9[87174]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:30:59 np0005542546 python3.9[87326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:30:59 np0005542546 python3.9[87449]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693058.6814342-537-225081093810800/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=218cb6a4bef8ebdc690d8818c8f05532e3b88133 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:00 np0005542546 systemd[1]: session-18.scope: Deactivated successfully.
Dec  2 11:31:00 np0005542546 systemd[1]: session-18.scope: Consumed 35.321s CPU time.
Dec  2 11:31:00 np0005542546 systemd-logind[790]: Session 18 logged out. Waiting for processes to exit.
Dec  2 11:31:00 np0005542546 systemd-logind[790]: Removed session 18.
Dec  2 11:31:05 np0005542546 systemd-logind[790]: New session 19 of user zuul.
Dec  2 11:31:05 np0005542546 systemd[1]: Started Session 19 of User zuul.
Dec  2 11:31:06 np0005542546 python3.9[87629]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:31:07 np0005542546 python3.9[87785]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:08 np0005542546 python3.9[87937]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:09 np0005542546 python3.9[88087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:31:10 np0005542546 python3.9[88239]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  2 11:31:11 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  2 11:31:12 np0005542546 python3.9[88395]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:31:13 np0005542546 python3.9[88479]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:31:15 np0005542546 python3.9[88632]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:31:16 np0005542546 python3[88787]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  2 11:31:17 np0005542546 python3.9[88940]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:18 np0005542546 python3.9[89092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:18 np0005542546 python3.9[89170]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:19 np0005542546 python3.9[89322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:19 np0005542546 python3.9[89400]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xcmvnl7q recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:20 np0005542546 python3.9[89552]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:21 np0005542546 python3.9[89630]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:22 np0005542546 python3.9[89782]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:22 np0005542546 python3[89935]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 11:31:23 np0005542546 python3.9[90087]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:24 np0005542546 python3.9[90212]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693083.0038075-157-98917157216845/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:25 np0005542546 python3.9[90364]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:25 np0005542546 python3.9[90489]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693084.541813-172-30376740901720/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:26 np0005542546 python3.9[90641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:26 np0005542546 python3.9[90766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693085.8203545-187-272953478484789/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:27 np0005542546 python3.9[90918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:28 np0005542546 python3.9[91043]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693087.0923069-202-32532740284640/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:29 np0005542546 python3.9[91195]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:29 np0005542546 python3.9[91320]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693088.5082014-217-229057382776449/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:30 np0005542546 python3.9[91472]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:30 np0005542546 python3.9[91624]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:31 np0005542546 python3.9[91779]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:32 np0005542546 python3.9[91932]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:33 np0005542546 python3.9[92085]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:31:33 np0005542546 python3.9[92239]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:34 np0005542546 python3.9[92394]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:35 np0005542546 python3.9[92544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:31:36 np0005542546 python3.9[92697]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:36 np0005542546 ovs-vsctl[92698]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  2 11:31:37 np0005542546 python3.9[92850]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:38 np0005542546 python3.9[93005]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:31:38 np0005542546 ovs-vsctl[93006]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  2 11:31:38 np0005542546 python3.9[93156]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:31:39 np0005542546 python3.9[93310]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:40 np0005542546 python3.9[93462]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:40 np0005542546 python3.9[93540]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:41 np0005542546 python3.9[93692]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:41 np0005542546 python3.9[93770]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:42 np0005542546 python3.9[93922]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:43 np0005542546 python3.9[94074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:43 np0005542546 python3.9[94152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:44 np0005542546 python3.9[94304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:45 np0005542546 python3.9[94382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:45 np0005542546 python3.9[94534]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:31:45 np0005542546 systemd[1]: Reloading.
Dec  2 11:31:45 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:31:45 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:31:46 np0005542546 python3.9[94723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:47 np0005542546 python3.9[94801]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:47 np0005542546 python3.9[94953]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:48 np0005542546 python3.9[95031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:49 np0005542546 python3.9[95183]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:31:49 np0005542546 systemd[1]: Reloading.
Dec  2 11:31:49 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:31:49 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:31:49 np0005542546 systemd[1]: Starting Create netns directory...
Dec  2 11:31:49 np0005542546 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 11:31:49 np0005542546 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 11:31:49 np0005542546 systemd[1]: Finished Create netns directory.
Dec  2 11:31:50 np0005542546 python3.9[95376]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:51 np0005542546 python3.9[95528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:51 np0005542546 python3.9[95651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693110.5497873-468-3028803949747/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:52 np0005542546 python3.9[95803]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:31:53 np0005542546 python3.9[95955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:31:53 np0005542546 python3.9[96078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693112.7165048-493-69869359495413/.source.json _original_basename=.dq78hwah follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:54 np0005542546 python3.9[96230]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:31:56 np0005542546 python3.9[96657]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  2 11:31:57 np0005542546 python3.9[96809]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:31:58 np0005542546 python3.9[96961]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 11:31:58 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:31:59 np0005542546 python3[97124]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:31:59 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:32:00 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:32:00 np0005542546 podman[97162]: 2025-12-02 16:32:00.106371146 +0000 UTC m=+0.056069107 container create 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 11:32:00 np0005542546 podman[97162]: 2025-12-02 16:32:00.075469194 +0000 UTC m=+0.025167195 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 11:32:00 np0005542546 python3[97124]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 11:32:00 np0005542546 python3.9[97352]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:32:00 np0005542546 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 11:32:01 np0005542546 python3.9[97506]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:02 np0005542546 python3.9[97582]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:32:02 np0005542546 python3.9[97733]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693122.150758-581-14117523950217/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:03 np0005542546 python3.9[97809]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:32:03 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:03 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:03 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:04 np0005542546 python3.9[97919]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:32:04 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:04 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:04 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:04 np0005542546 systemd[1]: Starting ovn_controller container...
Dec  2 11:32:04 np0005542546 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  2 11:32:04 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:32:04 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e95d7fa926b8ca81b4e6b4fb6be10b74e8c3e1d3ed35bacd183d3764e5ae57/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 11:32:04 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.
Dec  2 11:32:04 np0005542546 podman[97960]: 2025-12-02 16:32:04.683573301 +0000 UTC m=+0.148644167 container init 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  2 11:32:04 np0005542546 ovn_controller[97975]: + sudo -E kolla_set_configs
Dec  2 11:32:04 np0005542546 podman[97960]: 2025-12-02 16:32:04.720817168 +0000 UTC m=+0.185887974 container start 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 11:32:04 np0005542546 edpm-start-podman-container[97960]: ovn_controller
Dec  2 11:32:04 np0005542546 systemd[1]: Created slice User Slice of UID 0.
Dec  2 11:32:04 np0005542546 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  2 11:32:04 np0005542546 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  2 11:32:04 np0005542546 systemd[1]: Starting User Manager for UID 0...
Dec  2 11:32:04 np0005542546 edpm-start-podman-container[97959]: Creating additional drop-in dependency for "ovn_controller" (38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb)
Dec  2 11:32:04 np0005542546 podman[97982]: 2025-12-02 16:32:04.815203373 +0000 UTC m=+0.080283802 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:32:04 np0005542546 systemd[1]: 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb-55575cf97ed1aacc.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:32:04 np0005542546 systemd[1]: 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb-55575cf97ed1aacc.service: Failed with result 'exit-code'.
Dec  2 11:32:04 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:04 np0005542546 systemd[98005]: Queued start job for default target Main User Target.
Dec  2 11:32:04 np0005542546 systemd[98005]: Created slice User Application Slice.
Dec  2 11:32:04 np0005542546 systemd[98005]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  2 11:32:04 np0005542546 systemd[98005]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 11:32:04 np0005542546 systemd[98005]: Reached target Paths.
Dec  2 11:32:04 np0005542546 systemd[98005]: Reached target Timers.
Dec  2 11:32:04 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:04 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:04 np0005542546 systemd[98005]: Starting D-Bus User Message Bus Socket...
Dec  2 11:32:04 np0005542546 systemd[98005]: Starting Create User's Volatile Files and Directories...
Dec  2 11:32:04 np0005542546 systemd[98005]: Listening on D-Bus User Message Bus Socket.
Dec  2 11:32:04 np0005542546 systemd[98005]: Finished Create User's Volatile Files and Directories.
Dec  2 11:32:04 np0005542546 systemd[98005]: Reached target Sockets.
Dec  2 11:32:04 np0005542546 systemd[98005]: Reached target Basic System.
Dec  2 11:32:04 np0005542546 systemd[98005]: Reached target Main User Target.
Dec  2 11:32:04 np0005542546 systemd[98005]: Startup finished in 123ms.
Dec  2 11:32:05 np0005542546 systemd[1]: Started User Manager for UID 0.
Dec  2 11:32:05 np0005542546 systemd[1]: Started ovn_controller container.
Dec  2 11:32:05 np0005542546 systemd[1]: Started Session c1 of User root.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: INFO:__main__:Validating config file
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: INFO:__main__:Writing out command to execute
Dec  2 11:32:05 np0005542546 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: ++ cat /run_command
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + ARGS=
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + sudo kolla_copy_cacerts
Dec  2 11:32:05 np0005542546 systemd[1]: Started Session c2 of User root.
Dec  2 11:32:05 np0005542546 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + [[ ! -n '' ]]
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + . kolla_extend_start
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + umask 0022
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2226] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2235] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2250] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2257] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2262] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  2 11:32:05 np0005542546 kernel: br-int: entered promiscuous mode
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2474] manager: (ovn-d98a9d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  2 11:32:05 np0005542546 kernel: genev_sys_6081: entered promiscuous mode
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2681] device (genev_sys_6081): carrier: link connected
Dec  2 11:32:05 np0005542546 NetworkManager[56503]: <info>  [1764693125.2684] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec  2 11:32:05 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:05Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 11:32:05 np0005542546 systemd-udevd[98132]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 11:32:05 np0005542546 systemd-udevd[98137]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 11:32:05 np0005542546 python3.9[98243]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:32:05 np0005542546 ovs-vsctl[98244]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  2 11:32:06 np0005542546 python3.9[98396]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:32:06 np0005542546 ovs-vsctl[98398]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  2 11:32:07 np0005542546 python3.9[98551]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:32:07 np0005542546 ovs-vsctl[98552]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  2 11:32:07 np0005542546 systemd[1]: session-19.scope: Deactivated successfully.
Dec  2 11:32:07 np0005542546 systemd[1]: session-19.scope: Consumed 46.605s CPU time.
Dec  2 11:32:07 np0005542546 systemd-logind[790]: Session 19 logged out. Waiting for processes to exit.
Dec  2 11:32:07 np0005542546 systemd-logind[790]: Removed session 19.
Dec  2 11:32:13 np0005542546 systemd-logind[790]: New session 21 of user zuul.
Dec  2 11:32:13 np0005542546 systemd[1]: Started Session 21 of User zuul.
Dec  2 11:32:14 np0005542546 python3.9[98730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:32:15 np0005542546 python3.9[98886]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:15 np0005542546 systemd[1]: Stopping User Manager for UID 0...
Dec  2 11:32:15 np0005542546 systemd[98005]: Activating special unit Exit the Session...
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped target Main User Target.
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped target Basic System.
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped target Paths.
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped target Sockets.
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped target Timers.
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  2 11:32:15 np0005542546 systemd[98005]: Closed D-Bus User Message Bus Socket.
Dec  2 11:32:15 np0005542546 systemd[98005]: Stopped Create User's Volatile Files and Directories.
Dec  2 11:32:15 np0005542546 systemd[98005]: Removed slice User Application Slice.
Dec  2 11:32:15 np0005542546 systemd[98005]: Reached target Shutdown.
Dec  2 11:32:15 np0005542546 systemd[98005]: Finished Exit the Session.
Dec  2 11:32:15 np0005542546 systemd[98005]: Reached target Exit the Session.
Dec  2 11:32:15 np0005542546 systemd[1]: user@0.service: Deactivated successfully.
Dec  2 11:32:15 np0005542546 systemd[1]: Stopped User Manager for UID 0.
Dec  2 11:32:15 np0005542546 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  2 11:32:15 np0005542546 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  2 11:32:15 np0005542546 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  2 11:32:15 np0005542546 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  2 11:32:15 np0005542546 systemd[1]: Removed slice User Slice of UID 0.
Dec  2 11:32:16 np0005542546 python3.9[99040]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:16 np0005542546 python3.9[99192]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:17 np0005542546 python3.9[99344]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:17 np0005542546 python3.9[99496]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:18 np0005542546 python3.9[99646]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:32:19 np0005542546 python3.9[99798]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  2 11:32:20 np0005542546 python3.9[99948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:21 np0005542546 python3.9[100069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693140.3752408-86-246358640247636/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:22 np0005542546 python3.9[100219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:22 np0005542546 python3.9[100340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693141.8475244-101-200649581468878/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:23 np0005542546 python3.9[100493]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:32:24 np0005542546 python3.9[100579]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:32:27 np0005542546 python3.9[100732]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:32:27 np0005542546 python3.9[100885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:28 np0005542546 python3.9[101006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693147.3214343-138-205638347432428/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:29 np0005542546 python3.9[101156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:29 np0005542546 python3.9[101277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693148.6385949-138-218269949525693/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:30 np0005542546 python3.9[101427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:31 np0005542546 python3.9[101548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693150.343443-182-151157440807274/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:31 np0005542546 python3.9[101698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:32 np0005542546 python3.9[101819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693151.4518485-182-263397353387713/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:32 np0005542546 python3.9[101969]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:32:33 np0005542546 python3.9[102123]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:34 np0005542546 python3.9[102275]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:34 np0005542546 python3.9[102353]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:35 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:35Z|00025|memory|INFO|16128 kB peak resident set size after 29.9 seconds
Dec  2 11:32:35 np0005542546 ovn_controller[97975]: 2025-12-02T16:32:35Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  2 11:32:35 np0005542546 podman[102477]: 2025-12-02 16:32:35.150164512 +0000 UTC m=+0.117243362 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 11:32:35 np0005542546 python3.9[102524]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:36 np0005542546 python3.9[102609]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:36 np0005542546 python3.9[102761]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:37 np0005542546 python3.9[102913]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:38 np0005542546 python3.9[102991]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:38 np0005542546 python3.9[103143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:39 np0005542546 python3.9[103221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:40 np0005542546 python3.9[103373]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:32:40 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:40 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:40 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:41 np0005542546 python3.9[103562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:41 np0005542546 python3.9[103640]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:42 np0005542546 python3.9[103792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:42 np0005542546 python3.9[103870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:43 np0005542546 python3.9[104022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:32:43 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:43 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:43 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:43 np0005542546 systemd[1]: Starting Create netns directory...
Dec  2 11:32:43 np0005542546 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 11:32:43 np0005542546 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 11:32:43 np0005542546 systemd[1]: Finished Create netns directory.
Dec  2 11:32:44 np0005542546 python3.9[104214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:45 np0005542546 python3.9[104366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:45 np0005542546 python3.9[104489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693164.7156653-333-84729270987086/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:46 np0005542546 python3.9[104641]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:32:47 np0005542546 python3.9[104793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:32:48 np0005542546 python3.9[104916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693166.9403968-358-40721665924301/.source.json _original_basename=.lmt10oh1 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:48 np0005542546 python3.9[105068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:51 np0005542546 python3.9[105495]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  2 11:32:51 np0005542546 python3.9[105647]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:32:52 np0005542546 python3.9[105799]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 11:32:54 np0005542546 python3[105977]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:32:54 np0005542546 podman[106016]: 2025-12-02 16:32:54.298343567 +0000 UTC m=+0.054684851 container create d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 11:32:54 np0005542546 podman[106016]: 2025-12-02 16:32:54.273544325 +0000 UTC m=+0.029885639 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 11:32:54 np0005542546 python3[105977]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 11:32:55 np0005542546 python3.9[106206]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:32:55 np0005542546 python3.9[106360]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:56 np0005542546 python3.9[106436]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:32:57 np0005542546 python3.9[106587]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693176.469839-446-138031422305461/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:32:57 np0005542546 python3.9[106663]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:32:57 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:57 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:57 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:58 np0005542546 python3.9[106773]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:32:59 np0005542546 systemd[1]: Reloading.
Dec  2 11:32:59 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:32:59 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:32:59 np0005542546 systemd[1]: Starting ovn_metadata_agent container...
Dec  2 11:32:59 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:32:59 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfedd64080966c0eb0df629ee2c5caef9a03ddb33749f1539ba1938f0a0f286/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  2 11:32:59 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfedd64080966c0eb0df629ee2c5caef9a03ddb33749f1539ba1938f0a0f286/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 11:32:59 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.
Dec  2 11:32:59 np0005542546 podman[106814]: 2025-12-02 16:32:59.91518801 +0000 UTC m=+0.148450219 container init d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 11:32:59 np0005542546 ovn_metadata_agent[106830]: + sudo -E kolla_set_configs
Dec  2 11:32:59 np0005542546 podman[106814]: 2025-12-02 16:32:59.938911916 +0000 UTC m=+0.172174095 container start d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 11:32:59 np0005542546 edpm-start-podman-container[106814]: ovn_metadata_agent
Dec  2 11:32:59 np0005542546 podman[106836]: 2025-12-02 16:32:59.997636987 +0000 UTC m=+0.048919683 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  2 11:33:00 np0005542546 edpm-start-podman-container[106813]: Creating additional drop-in dependency for "ovn_metadata_agent" (d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942)
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Validating config file
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Copying service configuration files
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Writing out command to execute
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: ++ cat /run_command
Dec  2 11:33:00 np0005542546 systemd[1]: Reloading.
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + CMD=neutron-ovn-metadata-agent
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + ARGS=
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + sudo kolla_copy_cacerts
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + [[ ! -n '' ]]
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + . kolla_extend_start
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: Running command: 'neutron-ovn-metadata-agent'
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + umask 0022
Dec  2 11:33:00 np0005542546 ovn_metadata_agent[106830]: + exec neutron-ovn-metadata-agent
Dec  2 11:33:00 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:33:00 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:33:00 np0005542546 systemd[1]: Started ovn_metadata_agent container.
Dec  2 11:33:00 np0005542546 systemd[1]: session-21.scope: Deactivated successfully.
Dec  2 11:33:00 np0005542546 systemd[1]: session-21.scope: Consumed 34.261s CPU time.
Dec  2 11:33:00 np0005542546 systemd-logind[790]: Session 21 logged out. Waiting for processes to exit.
Dec  2 11:33:00 np0005542546 systemd-logind[790]: Removed session 21.
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.789 106835 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.791 106835 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.792 106835 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.793 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.793 106835 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.793 106835 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.793 106835 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.794 106835 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.795 106835 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.796 106835 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.797 106835 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.798 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.799 106835 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.800 106835 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.801 106835 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.802 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.803 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.804 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.804 106835 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.804 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.804 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.804 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.804 106835 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.805 106835 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.806 106835 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.807 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.808 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.809 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.810 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.811 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.812 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.813 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.814 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.815 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.816 106835 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.816 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.816 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.816 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.816 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.816 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.817 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.818 106835 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.819 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.820 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.821 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.822 106835 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.823 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.824 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.825 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.826 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.827 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.828 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.829 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.830 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.831 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.831 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.831 106835 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.831 106835 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.841 106835 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.842 106835 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.842 106835 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.842 106835 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.843 106835 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.856 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 000c10a1-5e88-4874-8132-a124d4da5271 (UUID: 000c10a1-5e88-4874-8132-a124d4da5271) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.886 106835 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.887 106835 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.887 106835 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.887 106835 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.891 106835 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.896 106835 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.901 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '000c10a1-5e88-4874-8132-a124d4da5271'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], external_ids={}, name=000c10a1-5e88-4874-8132-a124d4da5271, nb_cfg_timestamp=1764693133252, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.902 106835 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fdd566bf160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.903 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.904 106835 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.904 106835 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.904 106835 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.910 106835 DEBUG oslo_service.service [-] Started child 106942 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.915 106835 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpwg4lbh7g/privsep.sock']#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.917 106942 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1017662'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.948 106942 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.948 106942 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.948 106942 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.952 106942 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.959 106942 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  2 11:33:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:01.965 106942 INFO eventlet.wsgi.server [-] (106942) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  2 11:33:02 np0005542546 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.616 106835 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.617 106835 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwg4lbh7g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.502 106947 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.506 106947 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.508 106947 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.508 106947 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106947#033[00m
Dec  2 11:33:02 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:02.620 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[e902989c-1857-45c8-a342-6f533d868f81]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.145 106947 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.145 106947 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.145 106947 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.723 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[b942c3e9-7252-4eb5-9303-5d235c125b66]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.726 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, column=external_ids, values=({'neutron:ovn-metadata-id': '3909373b-720a-51eb-a369-8c328f1ebb75'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.767 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.773 106835 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.773 106835 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.773 106835 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.773 106835 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.773 106835 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.774 106835 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.775 106835 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.776 106835 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.777 106835 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.778 106835 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.779 106835 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.780 106835 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.781 106835 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.782 106835 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.783 106835 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.784 106835 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.785 106835 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.786 106835 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.787 106835 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.788 106835 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.789 106835 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.790 106835 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.791 106835 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.792 106835 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.793 106835 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.794 106835 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.795 106835 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.796 106835 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.797 106835 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.798 106835 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.799 106835 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.800 106835 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.801 106835 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.802 106835 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.803 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.804 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.805 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.806 106835 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.807 106835 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.807 106835 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:33:03 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:33:03.807 106835 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 11:33:05 np0005542546 systemd-logind[790]: New session 22 of user zuul.
Dec  2 11:33:05 np0005542546 systemd[1]: Started Session 22 of User zuul.
Dec  2 11:33:05 np0005542546 podman[106954]: 2025-12-02 16:33:05.965312002 +0000 UTC m=+0.119530196 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:33:06 np0005542546 python3.9[107131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:33:08 np0005542546 python3.9[107287]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:09 np0005542546 python3.9[107452]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:33:09 np0005542546 systemd[1]: Reloading.
Dec  2 11:33:09 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:33:09 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:33:10 np0005542546 python3.9[107636]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:33:10 np0005542546 network[107653]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:33:10 np0005542546 network[107654]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:33:10 np0005542546 network[107655]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:33:14 np0005542546 python3.9[107916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:15 np0005542546 python3.9[108069]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:16 np0005542546 python3.9[108222]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:17 np0005542546 python3.9[108375]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:17 np0005542546 python3.9[108528]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:18 np0005542546 python3.9[108681]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:19 np0005542546 python3.9[108834]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:33:20 np0005542546 python3.9[108987]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:21 np0005542546 python3.9[109139]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:21 np0005542546 python3.9[109291]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:22 np0005542546 python3.9[109443]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:22 np0005542546 python3.9[109595]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:23 np0005542546 python3.9[109747]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:23 np0005542546 python3.9[109899]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:24 np0005542546 python3.9[110051]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:25 np0005542546 python3.9[110203]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:25 np0005542546 python3.9[110355]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:26 np0005542546 python3.9[110507]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:27 np0005542546 python3.9[110659]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:27 np0005542546 python3.9[110811]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:28 np0005542546 python3.9[110963]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:33:28 np0005542546 python3.9[111115]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:29 np0005542546 python3.9[111267]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:33:30 np0005542546 podman[111367]: 2025-12-02 16:33:30.274329233 +0000 UTC m=+0.083724901 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  2 11:33:30 np0005542546 python3.9[111438]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:33:30 np0005542546 systemd[1]: Reloading.
Dec  2 11:33:30 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:33:30 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:33:31 np0005542546 python3.9[111625]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:32 np0005542546 python3.9[111778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:32 np0005542546 python3.9[111931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:33 np0005542546 python3.9[112084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:34 np0005542546 python3.9[112237]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:35 np0005542546 python3.9[112390]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:36 np0005542546 podman[112515]: 2025-12-02 16:33:36.250268096 +0000 UTC m=+0.133979208 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 11:33:36 np0005542546 python3.9[112563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:33:37 np0005542546 python3.9[112724]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  2 11:33:38 np0005542546 python3.9[112877]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 11:33:39 np0005542546 python3.9[113035]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 11:33:40 np0005542546 python3.9[113195]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:33:41 np0005542546 python3.9[113279]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:34:01 np0005542546 podman[113472]: 2025-12-02 16:34:01.257119026 +0000 UTC m=+0.078194229 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 11:34:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:34:01.834 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:34:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:34:01.845 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:34:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:34:01.846 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:34:07 np0005542546 podman[113495]: 2025-12-02 16:34:07.273184752 +0000 UTC m=+0.102124205 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 11:34:07 np0005542546 kernel: SELinux:  Converting 2758 SID table entries...
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 11:34:07 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 11:34:17 np0005542546 kernel: SELinux:  Converting 2758 SID table entries...
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 11:34:17 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 11:34:32 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  2 11:34:32 np0005542546 podman[115431]: 2025-12-02 16:34:32.281159523 +0000 UTC m=+0.094337949 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  2 11:34:38 np0005542546 podman[119092]: 2025-12-02 16:34:38.276001576 +0000 UTC m=+0.110181350 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 11:35:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:35:01.836 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:35:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:35:01.837 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:35:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:35:01.838 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:35:03 np0005542546 podman[130386]: 2025-12-02 16:35:03.223663837 +0000 UTC m=+0.059928120 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 11:35:09 np0005542546 kernel: SELinux:  Converting 2759 SID table entries...
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability open_perms=1
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability always_check_network=0
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 11:35:09 np0005542546 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 11:35:09 np0005542546 podman[130412]: 2025-12-02 16:35:09.251957231 +0000 UTC m=+0.089310250 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 11:35:10 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:35:10 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  2 11:35:10 np0005542546 dbus-broker-launch[753]: Noticed file-system modification, trigger reload.
Dec  2 11:35:17 np0005542546 systemd[1]: Stopping OpenSSH server daemon...
Dec  2 11:35:17 np0005542546 systemd[1]: sshd.service: Deactivated successfully.
Dec  2 11:35:17 np0005542546 systemd[1]: Stopped OpenSSH server daemon.
Dec  2 11:35:17 np0005542546 systemd[1]: sshd.service: Consumed 4.297s CPU time, read 32.0K from disk, written 56.0K to disk.
Dec  2 11:35:17 np0005542546 systemd[1]: Stopped target sshd-keygen.target.
Dec  2 11:35:17 np0005542546 systemd[1]: Stopping sshd-keygen.target...
Dec  2 11:35:17 np0005542546 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 11:35:17 np0005542546 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 11:35:17 np0005542546 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 11:35:17 np0005542546 systemd[1]: Reached target sshd-keygen.target.
Dec  2 11:35:17 np0005542546 systemd[1]: Starting OpenSSH server daemon...
Dec  2 11:35:17 np0005542546 systemd[1]: Started OpenSSH server daemon.
Dec  2 11:35:19 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:35:19 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:35:19 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:19 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:19 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:19 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:35:24 np0005542546 python3.9[136927]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:35:24 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:24 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:24 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:25 np0005542546 python3.9[138114]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:35:26 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:26 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:26 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:27 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:35:27 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:35:27 np0005542546 systemd[1]: man-db-cache-update.service: Consumed 9.807s CPU time.
Dec  2 11:35:27 np0005542546 systemd[1]: run-rf36cbaef934f417ea3f326fb4c8cdddb.service: Deactivated successfully.
Dec  2 11:35:27 np0005542546 python3.9[140323]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:35:27 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:27 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:27 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:28 np0005542546 python3.9[140515]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:35:28 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:28 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:28 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:29 np0005542546 python3.9[140705]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:29 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:30 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:30 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:30 np0005542546 python3.9[140895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:32 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:32 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:32 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:33 np0005542546 python3.9[141084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:33 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:33 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:33 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:33 np0005542546 podman[141124]: 2025-12-02 16:35:33.549474649 +0000 UTC m=+0.064211719 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 11:35:34 np0005542546 python3.9[141295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:35 np0005542546 python3.9[141450]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:35 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:35 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:35 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:36 np0005542546 python3.9[141640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 11:35:36 np0005542546 systemd[1]: Reloading.
Dec  2 11:35:36 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:35:36 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:35:36 np0005542546 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  2 11:35:36 np0005542546 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  2 11:35:37 np0005542546 python3.9[141832]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:38 np0005542546 python3.9[141987]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:39 np0005542546 python3.9[142142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:39 np0005542546 podman[142269]: 2025-12-02 16:35:39.608840861 +0000 UTC m=+0.121524692 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  2 11:35:39 np0005542546 python3.9[142312]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:40 np0005542546 python3.9[142477]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:41 np0005542546 python3.9[142632]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:42 np0005542546 python3.9[142787]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:43 np0005542546 python3.9[142942]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:43 np0005542546 python3.9[143097]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:44 np0005542546 python3.9[143252]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:45 np0005542546 python3.9[143407]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:46 np0005542546 python3.9[143562]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:47 np0005542546 python3.9[143717]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:48 np0005542546 python3.9[143872]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 11:35:49 np0005542546 python3.9[144027]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:35:49 np0005542546 python3.9[144179]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:35:50 np0005542546 python3.9[144331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:35:50 np0005542546 python3.9[144483]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:35:51 np0005542546 python3.9[144635]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:35:52 np0005542546 python3.9[144787]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:35:53 np0005542546 python3.9[144939]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:35:53 np0005542546 python3.9[145064]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693352.4449713-554-176586249171831/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:35:54 np0005542546 python3.9[145216]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:35:55 np0005542546 python3.9[145341]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693354.1526418-554-262456407552266/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:35:55 np0005542546 python3.9[145493]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:35:56 np0005542546 python3.9[145618]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693355.4382033-554-73634922342789/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:35:57 np0005542546 python3.9[145770]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:35:58 np0005542546 python3.9[145895]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693356.732121-554-185127316866414/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:35:58 np0005542546 python3.9[146047]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:35:59 np0005542546 python3.9[146172]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693358.2193556-554-96822142599366/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:35:59 np0005542546 python3.9[146324]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:00 np0005542546 python3.9[146449]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693359.3552368-554-247489368225592/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:01 np0005542546 python3.9[146601]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:01 np0005542546 python3.9[146724]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693360.6124485-554-184402558797130/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:36:01.837 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:36:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:36:01.837 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:36:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:36:01.837 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:36:02 np0005542546 python3.9[146876]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:02 np0005542546 python3.9[147001]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764693361.7321682-554-40189153298231/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:03 np0005542546 python3.9[147153]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  2 11:36:03 np0005542546 podman[147278]: 2025-12-02 16:36:03.891271496 +0000 UTC m=+0.060587902 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  2 11:36:04 np0005542546 python3.9[147326]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:04 np0005542546 python3.9[147478]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:05 np0005542546 python3.9[147630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:05 np0005542546 python3.9[147782]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:06 np0005542546 python3.9[147934]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:07 np0005542546 python3.9[148086]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:07 np0005542546 python3.9[148238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:08 np0005542546 python3.9[148390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:09 np0005542546 python3.9[148542]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:09 np0005542546 podman[148694]: 2025-12-02 16:36:09.763148508 +0000 UTC m=+0.080984519 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  2 11:36:09 np0005542546 python3.9[148695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:10 np0005542546 python3.9[148874]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:11 np0005542546 python3.9[149028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:11 np0005542546 python3.9[149180]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:12 np0005542546 python3.9[149332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:13 np0005542546 python3.9[149484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:13 np0005542546 python3.9[149607]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693372.4809651-775-268152239345418/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:14 np0005542546 python3.9[149759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:14 np0005542546 python3.9[149882]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693373.7259405-775-127095610166854/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:15 np0005542546 python3.9[150034]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:15 np0005542546 python3.9[150157]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693374.8854408-775-14748984255920/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:16 np0005542546 python3.9[150309]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:16 np0005542546 python3.9[150432]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693376.0246844-775-159393423077098/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:17 np0005542546 python3.9[150584]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:18 np0005542546 python3.9[150707]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693377.100484-775-174792544445535/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:18 np0005542546 python3.9[150859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:19 np0005542546 python3.9[150982]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693378.2520359-775-123730732696223/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:19 np0005542546 python3.9[151134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:20 np0005542546 python3.9[151257]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693379.3614323-775-183713241018560/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:20 np0005542546 python3.9[151409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:21 np0005542546 python3.9[151532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693380.4611664-775-126547472306460/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:22 np0005542546 python3.9[151684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:22 np0005542546 python3.9[151807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693381.6177795-775-229168431842163/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:23 np0005542546 python3.9[151959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:23 np0005542546 python3.9[152082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693382.7538018-775-40223363737572/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:24 np0005542546 python3.9[152234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:24 np0005542546 python3.9[152357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693383.934353-775-228276169916444/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:25 np0005542546 python3.9[152509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:26 np0005542546 python3.9[152632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693385.0528176-775-160140631181737/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:26 np0005542546 python3.9[152784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:27 np0005542546 python3.9[152907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693386.2486212-775-188922009427263/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:28 np0005542546 python3.9[153059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:28 np0005542546 python3.9[153182]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693387.4912052-775-152464040825537/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
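The twelve stat/copy task pairs above render the same `libvirt-socket.unit.j2` template (identical checksum `0bad41f409b4...`) into a per-socket `override.conf` drop-in under `/etc/systemd/system/<unit>.socket.d/`. The rendered body is not logged (`content=NOT_LOGGING_PARAMETER`); a minimal sketch of the equivalent file operation, with a purely hypothetical drop-in body, is:

```python
import os
from pathlib import Path

def write_socket_override(unit: str, body: str, root: str = "/") -> Path:
    """Create <root>/etc/systemd/system/<unit>.d/override.conf with mode
    0644, mirroring what the ansible.legacy.copy tasks above do."""
    drop_dir = Path(root) / "etc/systemd/system" / f"{unit}.d"
    drop_dir.mkdir(parents=True, exist_ok=True)
    conf = drop_dir / "override.conf"
    conf.write_text(body)
    os.chmod(conf, 0o644)  # owner=root group=root is implied when run as root
    return conf

# Hypothetical [Socket] settings -- the real template output is not logged.
body = "[Socket]\nSocketMode=0660\n"
conf = write_socket_override("virtqemud.socket", body, root="/tmp/demo")
```

After writing drop-ins like these, a `systemctl daemon-reload` (visible later in the log as `systemd[1]: Reloading.`) is required before the overrides take effect.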
Dec  2 11:36:29 np0005542546 python3.9[153332]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:36:30 np0005542546 python3.9[153487]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  2 11:36:31 np0005542546 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  2 11:36:32 np0005542546 python3.9[153643]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:32 np0005542546 python3.9[153795]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:33 np0005542546 python3.9[153947]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:34 np0005542546 python3.9[154099]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:34 np0005542546 podman[154100]: 2025-12-02 16:36:34.222641935 +0000 UTC m=+0.049050686 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)

Dec  2 11:36:34 np0005542546 python3.9[154270]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:35 np0005542546 python3.9[154422]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:35 np0005542546 python3.9[154574]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:36 np0005542546 python3.9[154726]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:37 np0005542546 python3.9[154878]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:37 np0005542546 python3.9[155030]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
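The nine copy tasks above install the same `tls.crt`, `tls.key`, and `ca.crt` from `/var/lib/openstack/certs/libvirt/default/` into multiple locations with differing permissions: world-readable certs under `/etc/pki/libvirt/`, a root-only `serverkey.pem` (0600), and `root:qemu` 0640 copies under `/etc/pki/qemu/`. Because `remote_src=True`, these are local file copies on the host. A sketch of the equivalent operation (chown omitted, since it needs root and host-specific uid/gid lookups):

```python
import os
import shutil
from pathlib import Path

def install_cert(src: str, dest: str, mode: int) -> Path:
    """Copy an on-host cert/key into place and clamp its mode, like the
    remote_src=True ansible.legacy.copy tasks above."""
    dest_path = Path(dest)
    dest_path.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dest_path)  # contents only; mode set explicitly below
    os.chmod(dest_path, mode)
    return dest_path

# Demo under /tmp rather than /etc/pki; the PEM body is a placeholder.
src = Path("/tmp/demo-certs/tls.crt")
src.parent.mkdir(parents=True, exist_ok=True)
src.write_text("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n")
out = install_cert(str(src), "/tmp/demo-certs/pki/qemu/server-cert.pem", 0o640)
```

Note that the same key material lands in both server and client roles (e.g. `servercert.pem` and `clientcert.pem` share one `tls.crt` source), so a single certificate is serving both sides of the libvirt TLS handshake here.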
Dec  2 11:36:38 np0005542546 python3.9[155182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:36:38 np0005542546 systemd[1]: Reloading.
Dec  2 11:36:38 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:36:38 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:36:38 np0005542546 systemd[1]: Starting libvirt logging daemon socket...
Dec  2 11:36:38 np0005542546 systemd[1]: Listening on libvirt logging daemon socket.
Dec  2 11:36:39 np0005542546 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  2 11:36:39 np0005542546 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  2 11:36:39 np0005542546 systemd[1]: Starting libvirt logging daemon...
Dec  2 11:36:39 np0005542546 systemd[1]: Started libvirt logging daemon.
Dec  2 11:36:39 np0005542546 python3.9[155375]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:36:39 np0005542546 systemd[1]: Reloading.
Dec  2 11:36:40 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:36:40 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:36:40 np0005542546 podman[155377]: 2025-12-02 16:36:40.118857052 +0000 UTC m=+0.123353990 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  2 11:36:40 np0005542546 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  2 11:36:40 np0005542546 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  2 11:36:40 np0005542546 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  2 11:36:40 np0005542546 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  2 11:36:40 np0005542546 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  2 11:36:40 np0005542546 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  2 11:36:40 np0005542546 systemd[1]: Starting libvirt nodedev daemon...
Dec  2 11:36:40 np0005542546 systemd[1]: Started libvirt nodedev daemon.
Dec  2 11:36:41 np0005542546 python3.9[155619]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:36:41 np0005542546 systemd[1]: Reloading.
Dec  2 11:36:41 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:36:41 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:36:41 np0005542546 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  2 11:36:41 np0005542546 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  2 11:36:41 np0005542546 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  2 11:36:41 np0005542546 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  2 11:36:41 np0005542546 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  2 11:36:41 np0005542546 systemd[1]: Starting libvirt proxy daemon...
Dec  2 11:36:41 np0005542546 systemd[1]: Started libvirt proxy daemon.
Dec  2 11:36:41 np0005542546 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  2 11:36:41 np0005542546 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  2 11:36:41 np0005542546 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  2 11:36:42 np0005542546 python3.9[155837]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:36:42 np0005542546 systemd[1]: Reloading.
Dec  2 11:36:42 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:36:42 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:36:42 np0005542546 systemd[1]: Listening on libvirt locking daemon socket.
Dec  2 11:36:42 np0005542546 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  2 11:36:42 np0005542546 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  2 11:36:42 np0005542546 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  2 11:36:42 np0005542546 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  2 11:36:42 np0005542546 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  2 11:36:42 np0005542546 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  2 11:36:42 np0005542546 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  2 11:36:42 np0005542546 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  2 11:36:42 np0005542546 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  2 11:36:42 np0005542546 systemd[1]: Starting libvirt QEMU daemon...
Dec  2 11:36:42 np0005542546 systemd[1]: Started libvirt QEMU daemon.
Dec  2 11:36:42 np0005542546 setroubleshoot[155655]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5ab51730-550e-4e1c-9afa-bf56b143d852
Dec  2 11:36:42 np0005542546 setroubleshoot[155655]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec  2 11:36:43 np0005542546 python3.9[156054]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:36:43 np0005542546 systemd[1]: Reloading.
Dec  2 11:36:43 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:36:43 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:36:43 np0005542546 systemd[1]: Starting libvirt secret daemon socket...
Dec  2 11:36:43 np0005542546 systemd[1]: Listening on libvirt secret daemon socket.
Dec  2 11:36:43 np0005542546 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  2 11:36:43 np0005542546 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  2 11:36:43 np0005542546 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  2 11:36:43 np0005542546 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  2 11:36:43 np0005542546 systemd[1]: Starting libvirt secret daemon...
Dec  2 11:36:43 np0005542546 systemd[1]: Started libvirt secret daemon.
Dec  2 11:36:44 np0005542546 python3.9[156265]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:45 np0005542546 python3.9[156417]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:36:46 np0005542546 python3.9[156569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:46 np0005542546 python3.9[156692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693405.887677-1120-64802927829314/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:47 np0005542546 python3.9[156844]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:48 np0005542546 python3.9[156996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:49 np0005542546 python3.9[157074]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:49 np0005542546 python3.9[157226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:50 np0005542546 python3.9[157304]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pnr7000n recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:50 np0005542546 python3.9[157456]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:51 np0005542546 python3.9[157534]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:51 np0005542546 python3.9[157686]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:36:52 np0005542546 python3[157839]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 11:36:52 np0005542546 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  2 11:36:52 np0005542546 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  2 11:36:53 np0005542546 python3.9[157991]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:53 np0005542546 python3.9[158069]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:54 np0005542546 python3.9[158221]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:55 np0005542546 python3.9[158299]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:55 np0005542546 python3.9[158451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:56 np0005542546 python3.9[158529]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:57 np0005542546 python3.9[158681]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:57 np0005542546 python3.9[158759]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:58 np0005542546 python3.9[158911]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:36:59 np0005542546 python3.9[159036]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693417.7798011-1245-8586173172495/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:36:59 np0005542546 python3.9[159188]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:00 np0005542546 python3.9[159340]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:37:01 np0005542546 python3.9[159495]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:37:01.838 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:37:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:37:01.839 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:37:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:37:01.839 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:37:01 np0005542546 python3.9[159647]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:37:02 np0005542546 python3.9[159800]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:37:03 np0005542546 python3.9[159954]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:37:03 np0005542546 python3.9[160109]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:04 np0005542546 podman[160233]: 2025-12-02 16:37:04.417258517 +0000 UTC m=+0.066557900 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  2 11:37:04 np0005542546 python3.9[160278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:37:05 np0005542546 python3.9[160404]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693424.0914536-1317-221852738821547/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:05 np0005542546 python3.9[160556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:37:06 np0005542546 python3.9[160679]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693425.3950472-1332-190611507260129/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:07 np0005542546 python3.9[160831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:37:07 np0005542546 python3.9[160954]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693426.6909077-1347-166158219103773/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:08 np0005542546 python3.9[161106]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:37:08 np0005542546 systemd[1]: Reloading.
Dec  2 11:37:08 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:37:08 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:37:10 np0005542546 systemd[1]: Reached target edpm_libvirt.target.
Dec  2 11:37:10 np0005542546 podman[161269]: 2025-12-02 16:37:10.65642478 +0000 UTC m=+0.133993164 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 11:37:10 np0005542546 python3.9[161312]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 11:37:10 np0005542546 systemd[1]: Reloading.
Dec  2 11:37:10 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:37:10 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:37:11 np0005542546 systemd[1]: Reloading.
Dec  2 11:37:11 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:37:11 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:37:11 np0005542546 systemd[1]: session-22.scope: Deactivated successfully.
Dec  2 11:37:11 np0005542546 systemd[1]: session-22.scope: Consumed 3min 20.489s CPU time.
Dec  2 11:37:11 np0005542546 systemd-logind[790]: Session 22 logged out. Waiting for processes to exit.
Dec  2 11:37:11 np0005542546 systemd-logind[790]: Removed session 22.
Dec  2 11:37:17 np0005542546 systemd-logind[790]: New session 23 of user zuul.
Dec  2 11:37:17 np0005542546 systemd[1]: Started Session 23 of User zuul.
Dec  2 11:37:18 np0005542546 python3.9[161570]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:37:20 np0005542546 python3.9[161724]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:37:20 np0005542546 network[161741]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:37:20 np0005542546 network[161742]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:37:20 np0005542546 network[161743]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:37:23 np0005542546 python3.9[162014]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:37:24 np0005542546 python3.9[162098]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:37:31 np0005542546 python3.9[162253]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:37:31 np0005542546 python3.9[162405]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:37:32 np0005542546 python3.9[162558]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:37:33 np0005542546 python3.9[162710]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:37:33 np0005542546 python3.9[162863]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:37:34 np0005542546 podman[162958]: 2025-12-02 16:37:34.552979118 +0000 UTC m=+0.075456437 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 11:37:34 np0005542546 python3.9[163005]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693453.4318104-95-154347138861244/.source.iscsi _original_basename=.ux9asaei follow=False checksum=5f98fef0417d6be0cf75673e514b0241f8120442 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:35 np0005542546 python3.9[163157]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:36 np0005542546 python3.9[163309]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:36 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:37:36 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:37:37 np0005542546 python3.9[163462]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:37:37 np0005542546 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  2 11:37:38 np0005542546 python3.9[163618]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:37:38 np0005542546 systemd[1]: Reloading.
Dec  2 11:37:38 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:37:38 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:37:38 np0005542546 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  2 11:37:38 np0005542546 systemd[1]: Starting Open-iSCSI...
Dec  2 11:37:38 np0005542546 kernel: Loading iSCSI transport class v2.0-870.
Dec  2 11:37:38 np0005542546 systemd[1]: Started Open-iSCSI.
Dec  2 11:37:38 np0005542546 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  2 11:37:38 np0005542546 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  2 11:37:39 np0005542546 python3.9[163818]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:37:39 np0005542546 network[163835]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:37:39 np0005542546 network[163836]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:37:39 np0005542546 network[163837]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:37:40 np0005542546 podman[163844]: 2025-12-02 16:37:40.864110462 +0000 UTC m=+0.097712990 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  2 11:37:45 np0005542546 python3.9[164132]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 11:37:46 np0005542546 python3.9[164284]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  2 11:37:47 np0005542546 python3.9[164440]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:37:48 np0005542546 python3.9[164563]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693466.9825456-172-166919081095014/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:48 np0005542546 python3.9[164715]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:50 np0005542546 python3.9[164867]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:37:50 np0005542546 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  2 11:37:50 np0005542546 systemd[1]: Stopped Load Kernel Modules.
Dec  2 11:37:50 np0005542546 systemd[1]: Stopping Load Kernel Modules...
Dec  2 11:37:50 np0005542546 systemd[1]: Starting Load Kernel Modules...
Dec  2 11:37:50 np0005542546 systemd[1]: Finished Load Kernel Modules.
Dec  2 11:37:50 np0005542546 python3.9[165023]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:37:51 np0005542546 python3.9[165175]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:37:52 np0005542546 python3.9[165327]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:37:52 np0005542546 python3.9[165479]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:37:53 np0005542546 python3.9[165602]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693472.5044513-230-192633617467115/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:54 np0005542546 python3.9[165754]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:37:55 np0005542546 python3.9[165907]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:55 np0005542546 python3.9[166059]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:56 np0005542546 python3.9[166211]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:57 np0005542546 python3.9[166363]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:57 np0005542546 python3.9[166515]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:58 np0005542546 python3.9[166667]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:59 np0005542546 python3.9[166819]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:37:59 np0005542546 python3.9[166971]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:38:00 np0005542546 python3.9[167125]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:01 np0005542546 python3.9[167277]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:38:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:38:01.839 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:38:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:38:01.841 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:38:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:38:01.841 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:38:01 np0005542546 python3.9[167429]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:02 np0005542546 python3.9[167507]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:38:02 np0005542546 python3.9[167659]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:03 np0005542546 python3.9[167737]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:38:04 np0005542546 python3.9[167889]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:04 np0005542546 podman[168013]: 2025-12-02 16:38:04.915215169 +0000 UTC m=+0.049224526 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 11:38:05 np0005542546 python3.9[168060]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:05 np0005542546 python3.9[168138]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:06 np0005542546 python3.9[168290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:06 np0005542546 python3.9[168368]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:07 np0005542546 python3.9[168520]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:07 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:07 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:07 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:08 np0005542546 python3.9[168711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:08 np0005542546 python3.9[168789]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:09 np0005542546 python3.9[168941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:09 np0005542546 python3.9[169019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:10 np0005542546 python3.9[169171]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:10 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:10 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:10 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:11 np0005542546 systemd[1]: Starting Create netns directory...
Dec  2 11:38:11 np0005542546 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 11:38:11 np0005542546 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 11:38:11 np0005542546 systemd[1]: Finished Create netns directory.
Dec  2 11:38:11 np0005542546 podman[169209]: 2025-12-02 16:38:11.081849105 +0000 UTC m=+0.095205892 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 11:38:11 np0005542546 python3.9[169390]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:38:12 np0005542546 python3.9[169542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:13 np0005542546 python3.9[169665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693492.1171489-437-99433785769403/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:38:13 np0005542546 python3.9[169817]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:38:14 np0005542546 python3.9[169969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:15 np0005542546 python3.9[170092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693494.1559963-462-246108635073458/.source.json _original_basename=.vlk0kybd follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:15 np0005542546 python3.9[170244]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:19 np0005542546 python3.9[170671]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  2 11:38:20 np0005542546 python3.9[170823]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:38:21 np0005542546 python3.9[170975]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 11:38:22 np0005542546 python3[171153]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:38:23 np0005542546 podman[171188]: 2025-12-02 16:38:23.180619847 +0000 UTC m=+0.046123618 container create 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd)
Dec  2 11:38:23 np0005542546 podman[171188]: 2025-12-02 16:38:23.156149135 +0000 UTC m=+0.021652926 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  2 11:38:23 np0005542546 python3[171153]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  2 11:38:23 np0005542546 python3.9[171378]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:38:24 np0005542546 python3.9[171532]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:25 np0005542546 python3.9[171608]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:38:26 np0005542546 python3.9[171759]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693505.3905623-550-44404921590360/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:26 np0005542546 python3.9[171835]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:38:26 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:26 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:26 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:27 np0005542546 python3.9[171946]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:27 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:27 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:27 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:27 np0005542546 systemd[1]: Starting multipathd container...
Dec  2 11:38:27 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:38:27 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93835e2d33f84476969eddd5b761fa1ec321ca3ba84d983c9d38c5d17d1edb11/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 11:38:27 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93835e2d33f84476969eddd5b761fa1ec321ca3ba84d983c9d38c5d17d1edb11/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 11:38:27 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.
Dec  2 11:38:27 np0005542546 podman[171985]: 2025-12-02 16:38:27.944114969 +0000 UTC m=+0.131819303 container init 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd)
Dec  2 11:38:27 np0005542546 multipathd[172001]: + sudo -E kolla_set_configs
Dec  2 11:38:27 np0005542546 podman[171985]: 2025-12-02 16:38:27.978797852 +0000 UTC m=+0.166502116 container start 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 11:38:27 np0005542546 podman[171985]: multipathd
Dec  2 11:38:27 np0005542546 systemd[1]: Started multipathd container.
Dec  2 11:38:28 np0005542546 podman[172007]: 2025-12-02 16:38:28.045538275 +0000 UTC m=+0.057433399 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  2 11:38:28 np0005542546 multipathd[172001]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:38:28 np0005542546 multipathd[172001]: INFO:__main__:Validating config file
Dec  2 11:38:28 np0005542546 multipathd[172001]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:38:28 np0005542546 multipathd[172001]: INFO:__main__:Writing out command to execute
Dec  2 11:38:28 np0005542546 systemd[1]: 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24-79215ddb75f9b8ad.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:38:28 np0005542546 systemd[1]: 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24-79215ddb75f9b8ad.service: Failed with result 'exit-code'.
Dec  2 11:38:28 np0005542546 multipathd[172001]: ++ cat /run_command
Dec  2 11:38:28 np0005542546 multipathd[172001]: + CMD='/usr/sbin/multipathd -d'
Dec  2 11:38:28 np0005542546 multipathd[172001]: + ARGS=
Dec  2 11:38:28 np0005542546 multipathd[172001]: + sudo kolla_copy_cacerts
Dec  2 11:38:28 np0005542546 multipathd[172001]: + [[ ! -n '' ]]
Dec  2 11:38:28 np0005542546 multipathd[172001]: + . kolla_extend_start
Dec  2 11:38:28 np0005542546 multipathd[172001]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  2 11:38:28 np0005542546 multipathd[172001]: Running command: '/usr/sbin/multipathd -d'
Dec  2 11:38:28 np0005542546 multipathd[172001]: + umask 0022
Dec  2 11:38:28 np0005542546 multipathd[172001]: + exec /usr/sbin/multipathd -d
Dec  2 11:38:28 np0005542546 multipathd[172001]: 3056.690969 | --------start up--------
Dec  2 11:38:28 np0005542546 multipathd[172001]: 3056.691105 | read /etc/multipath.conf
Dec  2 11:38:28 np0005542546 multipathd[172001]: 3056.699275 | path checkers start up
Dec  2 11:38:28 np0005542546 python3.9[172190]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:38:29 np0005542546 python3.9[172344]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:38:30 np0005542546 python3.9[172508]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:38:30 np0005542546 systemd[1]: Stopping multipathd container...
Dec  2 11:38:30 np0005542546 multipathd[172001]: 3059.042521 | exit (signal)
Dec  2 11:38:30 np0005542546 multipathd[172001]: 3059.043289 | --------shut down-------
Dec  2 11:38:30 np0005542546 systemd[1]: libpod-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope: Deactivated successfully.
Dec  2 11:38:30 np0005542546 podman[172512]: 2025-12-02 16:38:30.488196579 +0000 UTC m=+0.090364314 container died 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:38:30 np0005542546 systemd[1]: 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24-79215ddb75f9b8ad.timer: Deactivated successfully.
Dec  2 11:38:30 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.
Dec  2 11:38:30 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24-userdata-shm.mount: Deactivated successfully.
Dec  2 11:38:30 np0005542546 systemd[1]: var-lib-containers-storage-overlay-93835e2d33f84476969eddd5b761fa1ec321ca3ba84d983c9d38c5d17d1edb11-merged.mount: Deactivated successfully.
Dec  2 11:38:30 np0005542546 podman[172512]: 2025-12-02 16:38:30.540223688 +0000 UTC m=+0.142391373 container cleanup 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:38:30 np0005542546 podman[172512]: multipathd
Dec  2 11:38:30 np0005542546 podman[172540]: multipathd
Dec  2 11:38:30 np0005542546 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  2 11:38:30 np0005542546 systemd[1]: Stopped multipathd container.
Dec  2 11:38:30 np0005542546 systemd[1]: Starting multipathd container...
Dec  2 11:38:30 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:38:30 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93835e2d33f84476969eddd5b761fa1ec321ca3ba84d983c9d38c5d17d1edb11/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 11:38:30 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93835e2d33f84476969eddd5b761fa1ec321ca3ba84d983c9d38c5d17d1edb11/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 11:38:30 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.
Dec  2 11:38:30 np0005542546 podman[172553]: 2025-12-02 16:38:30.756056267 +0000 UTC m=+0.125592691 container init 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 11:38:30 np0005542546 multipathd[172568]: + sudo -E kolla_set_configs
Dec  2 11:38:30 np0005542546 podman[172553]: 2025-12-02 16:38:30.778420422 +0000 UTC m=+0.147956806 container start 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd)
Dec  2 11:38:30 np0005542546 podman[172553]: multipathd
Dec  2 11:38:30 np0005542546 systemd[1]: Started multipathd container.
Dec  2 11:38:30 np0005542546 multipathd[172568]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:38:30 np0005542546 multipathd[172568]: INFO:__main__:Validating config file
Dec  2 11:38:30 np0005542546 multipathd[172568]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:38:30 np0005542546 multipathd[172568]: INFO:__main__:Writing out command to execute
Dec  2 11:38:30 np0005542546 multipathd[172568]: ++ cat /run_command
Dec  2 11:38:30 np0005542546 multipathd[172568]: + CMD='/usr/sbin/multipathd -d'
Dec  2 11:38:30 np0005542546 multipathd[172568]: + ARGS=
Dec  2 11:38:30 np0005542546 multipathd[172568]: + sudo kolla_copy_cacerts
Dec  2 11:38:30 np0005542546 podman[172575]: 2025-12-02 16:38:30.851343215 +0000 UTC m=+0.063215778 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 11:38:30 np0005542546 systemd[1]: 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24-3b05192d05c43bc8.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:38:30 np0005542546 systemd[1]: 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24-3b05192d05c43bc8.service: Failed with result 'exit-code'.
Dec  2 11:38:30 np0005542546 multipathd[172568]: + [[ ! -n '' ]]
Dec  2 11:38:30 np0005542546 multipathd[172568]: + . kolla_extend_start
Dec  2 11:38:30 np0005542546 multipathd[172568]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  2 11:38:30 np0005542546 multipathd[172568]: Running command: '/usr/sbin/multipathd -d'
Dec  2 11:38:30 np0005542546 multipathd[172568]: + umask 0022
Dec  2 11:38:30 np0005542546 multipathd[172568]: + exec /usr/sbin/multipathd -d
Dec  2 11:38:30 np0005542546 multipathd[172568]: 3059.480290 | --------start up--------
Dec  2 11:38:30 np0005542546 multipathd[172568]: 3059.480308 | read /etc/multipath.conf
Dec  2 11:38:30 np0005542546 multipathd[172568]: 3059.487343 | path checkers start up
Dec  2 11:38:31 np0005542546 python3.9[172759]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:32 np0005542546 python3.9[172911]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 11:38:33 np0005542546 python3.9[173063]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  2 11:38:33 np0005542546 kernel: Key type psk registered
Dec  2 11:38:33 np0005542546 python3.9[173224]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:38:34 np0005542546 python3.9[173347]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693513.4634235-630-44497729171141/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:35 np0005542546 podman[173471]: 2025-12-02 16:38:35.229169922 +0000 UTC m=+0.048564566 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 11:38:35 np0005542546 python3.9[173518]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:36 np0005542546 python3.9[173670]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:38:36 np0005542546 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  2 11:38:36 np0005542546 systemd[1]: Stopped Load Kernel Modules.
Dec  2 11:38:36 np0005542546 systemd[1]: Stopping Load Kernel Modules...
Dec  2 11:38:36 np0005542546 systemd[1]: Starting Load Kernel Modules...
Dec  2 11:38:36 np0005542546 systemd[1]: Finished Load Kernel Modules.
Dec  2 11:38:37 np0005542546 python3.9[173826]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:38:39 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:39 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:39 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:39 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:39 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:39 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:40 np0005542546 virtsecretd[156095]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  2 11:38:40 np0005542546 virtsecretd[156095]: hostname: compute-0
Dec  2 11:38:40 np0005542546 virtsecretd[156095]: nl_recv returned with error: No buffer space available
Dec  2 11:38:40 np0005542546 systemd-logind[790]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  2 11:38:40 np0005542546 systemd-logind[790]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  2 11:38:40 np0005542546 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  2 11:38:40 np0005542546 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 11:38:40 np0005542546 systemd[1]: Starting man-db-cache-update.service...
Dec  2 11:38:40 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:40 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:40 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:40 np0005542546 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 11:38:41 np0005542546 podman[174474]: 2025-12-02 16:38:41.263206826 +0000 UTC m=+0.093547901 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  2 11:38:41 np0005542546 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 11:38:41 np0005542546 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 11:38:41 np0005542546 systemd[1]: Finished man-db-cache-update.service.
Dec  2 11:38:41 np0005542546 systemd[1]: man-db-cache-update.service: Consumed 1.404s CPU time.
Dec  2 11:38:41 np0005542546 systemd[1]: run-rd7fe7f1a674a4fc599de331670e5ce8e.service: Deactivated successfully.
Dec  2 11:38:42 np0005542546 python3.9[175308]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:38:42 np0005542546 systemd[1]: Stopping Open-iSCSI...
Dec  2 11:38:42 np0005542546 iscsid[163657]: iscsid shutting down.
Dec  2 11:38:42 np0005542546 systemd[1]: iscsid.service: Deactivated successfully.
Dec  2 11:38:42 np0005542546 systemd[1]: Stopped Open-iSCSI.
Dec  2 11:38:42 np0005542546 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  2 11:38:42 np0005542546 systemd[1]: Starting Open-iSCSI...
Dec  2 11:38:42 np0005542546 systemd[1]: Started Open-iSCSI.
Dec  2 11:38:42 np0005542546 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  2 11:38:42 np0005542546 python3.9[175463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:38:43 np0005542546 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  2 11:38:44 np0005542546 python3.9[175622]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:45 np0005542546 python3.9[175775]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:38:45 np0005542546 systemd[1]: Reloading.
Dec  2 11:38:45 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:38:45 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:38:46 np0005542546 python3.9[175960]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:38:46 np0005542546 network[175977]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:38:46 np0005542546 network[175978]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:38:46 np0005542546 network[175979]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:38:49 np0005542546 python3.9[176253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:50 np0005542546 python3.9[176406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:51 np0005542546 python3.9[176559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:52 np0005542546 python3.9[176712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:52 np0005542546 python3.9[176865]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:53 np0005542546 python3.9[177018]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:54 np0005542546 python3.9[177171]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:55 np0005542546 python3.9[177324]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:38:56 np0005542546 python3.9[177477]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:56 np0005542546 python3.9[177629]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:57 np0005542546 python3.9[177781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:57 np0005542546 python3.9[177933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:58 np0005542546 python3.9[178085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:59 np0005542546 python3.9[178237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:38:59 np0005542546 python3.9[178389]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:00 np0005542546 python3.9[178541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:01 np0005542546 python3.9[178693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:01 np0005542546 podman[178718]: 2025-12-02 16:39:01.279629175 +0000 UTC m=+0.098399395 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 11:39:01 np0005542546 python3.9[178862]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:39:01.839 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:39:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:39:01.840 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:39:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:39:01.840 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:39:02 np0005542546 python3.9[179014]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:02 np0005542546 python3.9[179166]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:03 np0005542546 python3.9[179318]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:04 np0005542546 python3.9[179470]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:04 np0005542546 python3.9[179622]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:05 np0005542546 python3.9[179774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:06 np0005542546 podman[179898]: 2025-12-02 16:39:06.022632142 +0000 UTC m=+0.080784700 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  2 11:39:06 np0005542546 python3.9[179944]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:07 np0005542546 python3.9[180097]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:39:07 np0005542546 python3.9[180249]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:39:07 np0005542546 systemd[1]: Reloading.
Dec  2 11:39:08 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:39:08 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:39:08 np0005542546 python3.9[180437]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:09 np0005542546 python3.9[180590]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:10 np0005542546 python3.9[180743]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:10 np0005542546 python3.9[180896]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:11 np0005542546 python3.9[181049]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:11 np0005542546 podman[181051]: 2025-12-02 16:39:11.549079984 +0000 UTC m=+0.085925662 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 11:39:12 np0005542546 python3.9[181226]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:12 np0005542546 python3.9[181379]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:13 np0005542546 python3.9[181532]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:39:14 np0005542546 python3.9[181685]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:15 np0005542546 python3.9[181837]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:16 np0005542546 python3.9[181989]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:17 np0005542546 python3.9[182141]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:18 np0005542546 python3.9[182293]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:18 np0005542546 python3.9[182445]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:19 np0005542546 python3.9[182597]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:19 np0005542546 python3.9[182749]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:20 np0005542546 python3.9[182901]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:21 np0005542546 python3.9[183053]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:26 np0005542546 python3.9[183205]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  2 11:39:26 np0005542546 python3.9[183358]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 11:39:27 np0005542546 python3.9[183516]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 11:39:28 np0005542546 systemd-logind[790]: New session 24 of user zuul.
Dec  2 11:39:28 np0005542546 systemd[1]: Started Session 24 of User zuul.
Dec  2 11:39:29 np0005542546 systemd[1]: session-24.scope: Deactivated successfully.
Dec  2 11:39:29 np0005542546 systemd-logind[790]: Session 24 logged out. Waiting for processes to exit.
Dec  2 11:39:29 np0005542546 systemd-logind[790]: Removed session 24.
Dec  2 11:39:29 np0005542546 python3.9[183702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:30 np0005542546 python3.9[183823]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693569.2526681-1229-946781789757/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:30 np0005542546 python3.9[183973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:31 np0005542546 python3.9[184049]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:31 np0005542546 podman[184050]: 2025-12-02 16:39:31.447248607 +0000 UTC m=+0.073199312 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  2 11:39:31 np0005542546 python3.9[184220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:32 np0005542546 python3.9[184341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693571.4967775-1229-29465515203414/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:32 np0005542546 python3.9[184491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:33 np0005542546 python3.9[184612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693572.5346417-1229-150332523218732/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:34 np0005542546 python3.9[184762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:34 np0005542546 python3.9[184885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693573.591689-1229-65457873544358/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:35 np0005542546 python3.9[185035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:35 np0005542546 python3.9[185156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693574.7322896-1229-24322413676055/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:36 np0005542546 podman[185280]: 2025-12-02 16:39:36.128095472 +0000 UTC m=+0.058037782 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:39:36 np0005542546 python3.9[185327]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:36 np0005542546 python3.9[185479]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:37 np0005542546 python3.9[185631]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:38 np0005542546 python3.9[185783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:39 np0005542546 python3.9[185906]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764693578.004401-1336-203518758301238/.source _original_basename=.4j1d7efb follow=False checksum=d5f219720e542d1b99d0ca53b8cf969d55812519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  2 11:39:39 np0005542546 python3.9[186058]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:40 np0005542546 python3.9[186210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:41 np0005542546 python3.9[186331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693580.065608-1362-256877261095917/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:41 np0005542546 podman[186455]: 2025-12-02 16:39:41.682640286 +0000 UTC m=+0.090611686 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 11:39:41 np0005542546 python3.9[186500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:39:42 np0005542546 python3.9[186629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693581.2982705-1377-48564772032889/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:39:43 np0005542546 python3.9[186781]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  2 11:39:43 np0005542546 python3.9[186933]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:39:44 np0005542546 python3[187085]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:39:45 np0005542546 podman[187122]: 2025-12-02 16:39:44.927209453 +0000 UTC m=+0.025080477 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  2 11:39:45 np0005542546 podman[187122]: 2025-12-02 16:39:45.074257705 +0000 UTC m=+0.172128709 container create 2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  2 11:39:45 np0005542546 python3[187085]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  2 11:39:45 np0005542546 python3.9[187311]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:46 np0005542546 python3.9[187465]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  2 11:39:47 np0005542546 python3.9[187617]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:39:48 np0005542546 python3[187769]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:39:48 np0005542546 podman[187806]: 2025-12-02 16:39:48.471884361 +0000 UTC m=+0.072931086 container create f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 11:39:48 np0005542546 podman[187806]: 2025-12-02 16:39:48.435322386 +0000 UTC m=+0.036369201 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  2 11:39:48 np0005542546 python3[187769]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  2 11:39:49 np0005542546 python3.9[187996]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:50 np0005542546 python3.9[188150]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:50 np0005542546 python3.9[188301]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693590.0884624-1469-25177849679896/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:39:51 np0005542546 python3.9[188377]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:39:51 np0005542546 systemd[1]: Reloading.
Dec  2 11:39:51 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:39:51 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:39:52 np0005542546 python3.9[188488]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:39:52 np0005542546 systemd[1]: Reloading.
Dec  2 11:39:52 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:39:52 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:39:52 np0005542546 systemd[1]: Starting nova_compute container...
Dec  2 11:39:52 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:39:52 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:52 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:52 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:52 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:52 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:52 np0005542546 podman[188528]: 2025-12-02 16:39:52.666245772 +0000 UTC m=+0.105288724 container init f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  2 11:39:52 np0005542546 podman[188528]: 2025-12-02 16:39:52.676431634 +0000 UTC m=+0.115474576 container start f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:39:52 np0005542546 podman[188528]: nova_compute
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + sudo -E kolla_set_configs
Dec  2 11:39:52 np0005542546 systemd[1]: Started nova_compute container.
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Validating config file
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying service configuration files
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Deleting /etc/ceph
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Creating directory /etc/ceph
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /etc/ceph
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Writing out command to execute
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:52 np0005542546 nova_compute[188543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 11:39:52 np0005542546 nova_compute[188543]: ++ cat /run_command
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + CMD=nova-compute
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + ARGS=
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + sudo kolla_copy_cacerts
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + [[ ! -n '' ]]
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + . kolla_extend_start
Dec  2 11:39:52 np0005542546 nova_compute[188543]: Running command: 'nova-compute'
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + echo 'Running command: '\''nova-compute'\'''
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + umask 0022
Dec  2 11:39:52 np0005542546 nova_compute[188543]: + exec nova-compute
Dec  2 11:39:53 np0005542546 python3.9[188704]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:54 np0005542546 python3.9[188855]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.778 188547 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.779 188547 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.779 188547 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.779 188547 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  2 11:39:54 np0005542546 python3.9[189006]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.913 188547 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.926 188547 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 11:39:54 np0005542546 nova_compute[188543]: 2025-12-02 16:39:54.926 188547 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.683 188547 INFO nova.virt.driver [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.802 188547 INFO nova.compute.provider_config [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.815 188547 DEBUG oslo_concurrency.lockutils [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.815 188547 DEBUG oslo_concurrency.lockutils [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.815 188547 DEBUG oslo_concurrency.lockutils [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.815 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.816 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.817 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.817 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.817 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.817 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.817 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.818 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.818 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.818 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.818 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.818 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.818 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.819 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.819 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.819 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.819 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.819 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.820 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.820 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.820 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.820 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.820 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.821 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.821 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.821 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.821 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.821 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.821 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.822 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.822 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.822 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.822 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.822 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.823 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.823 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.823 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.823 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.823 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.824 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.824 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.824 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.824 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.824 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.824 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.825 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.825 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.825 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.825 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.825 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.826 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.826 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.826 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.826 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.826 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.826 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.827 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.827 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.827 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.827 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.827 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.827 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.828 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.828 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.828 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.828 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.828 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.829 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.829 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.829 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.829 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.829 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.830 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.830 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.830 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.830 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.830 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.831 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.831 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.831 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.831 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.831 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.832 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.832 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.832 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.832 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.832 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.832 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.833 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.833 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.833 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.833 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.833 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.833 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.834 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.834 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.834 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.834 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.834 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.835 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.835 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.835 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.835 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.835 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.835 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.836 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.836 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.836 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.836 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.836 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.837 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.837 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.837 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.837 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.837 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.837 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.838 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.838 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.838 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.838 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.838 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.838 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.839 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.839 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.839 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.839 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.839 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.839 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.840 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.840 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.840 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.840 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.840 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.841 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.841 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.841 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.841 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.841 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.841 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.842 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.842 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.842 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.842 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.842 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.843 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.843 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.843 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.843 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.843 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.844 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.844 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.844 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.844 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.844 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.844 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.845 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.845 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.845 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.845 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.845 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.846 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.846 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.846 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.846 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.847 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.847 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.847 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.847 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.847 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.847 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.848 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.848 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.848 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.848 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.848 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.848 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.849 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.850 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.851 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.852 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.853 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.853 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.853 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.853 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.853 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.853 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.854 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.855 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.856 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.856 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.856 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.856 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.856 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.856 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.857 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.858 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.858 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.858 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.858 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.858 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.858 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.859 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.860 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.861 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.861 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.861 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.861 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.861 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.861 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.862 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.863 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.864 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.864 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.864 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.864 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.864 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.864 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.865 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.866 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.867 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.868 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.869 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.869 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.869 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.869 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.869 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.869 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.870 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.871 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.872 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.872 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.872 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.872 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.872 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.872 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.873 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.874 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.874 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.874 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.874 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.874 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.875 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.876 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.877 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.877 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.877 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.877 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.877 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.877 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.878 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.879 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.880 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.881 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.881 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.881 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.881 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.881 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.881 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.882 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.883 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.884 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.885 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.886 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.887 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.888 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.889 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.890 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.891 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.892 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.893 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.893 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.893 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.893 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.893 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.893 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.894 188547 WARNING oslo_config.cfg [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  2 11:39:55 np0005542546 nova_compute[188543]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  2 11:39:55 np0005542546 nova_compute[188543]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  2 11:39:55 np0005542546 nova_compute[188543]: and ``live_migration_inbound_addr`` respectively.
Dec  2 11:39:55 np0005542546 nova_compute[188543]: ).  Its value may be silently ignored in the future.#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.894 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.894 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.894 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.894 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.894 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.895 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 python3.9[189161]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.895 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.895 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.895 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.895 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.895 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.896 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.896 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.896 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.896 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.896 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.896 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.897 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.898 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.899 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.899 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.899 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.899 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.899 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.899 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.900 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.901 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.902 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.903 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.904 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.905 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.905 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.905 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.905 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.905 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.906 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.906 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.906 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.906 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.906 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.906 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.907 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.908 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.909 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.910 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.911 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.912 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.913 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.914 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.914 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.914 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.914 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.914 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.914 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.915 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.916 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.917 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.918 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.919 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.920 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.920 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.920 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.920 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.920 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.920 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.921 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.922 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.923 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.924 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.925 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.926 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.927 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.928 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.929 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.930 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.930 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.930 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.930 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.930 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.931 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.932 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.933 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.934 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.935 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.935 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.935 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.935 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.935 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.936 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.936 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.936 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.936 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.936 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.936 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.937 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.937 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.937 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.937 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.937 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.937 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.938 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.939 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.940 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.940 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.940 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.940 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.940 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.940 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.941 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.942 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.943 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.944 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.945 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.946 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.947 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.947 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.947 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.947 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.948 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.948 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.948 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.948 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.948 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.948 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.949 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.950 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.950 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.950 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.950 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.950 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.951 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.952 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.952 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.952 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.952 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.952 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.952 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.953 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.954 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.955 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.956 188547 DEBUG oslo_service.service [None req-2fdc6142-c9f5-4f90-a335-d625572c391f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.957 188547 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.970 188547 DEBUG nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.971 188547 DEBUG nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.971 188547 DEBUG nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  2 11:39:55 np0005542546 nova_compute[188543]: 2025-12-02 16:39:55.971 188547 DEBUG nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  2 11:39:56 np0005542546 systemd[1]: Starting libvirt QEMU daemon...
Dec  2 11:39:56 np0005542546 systemd[1]: Started libvirt QEMU daemon.
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 2025-12-02 16:39:56.060 188547 DEBUG nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8f4abebfd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 2025-12-02 16:39:56.063 188547 DEBUG nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8f4abebfd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 2025-12-02 16:39:56.063 188547 INFO nova.virt.libvirt.driver [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 2025-12-02 16:39:56.078 188547 WARNING nova.virt.libvirt.driver [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 2025-12-02 16:39:56.079 188547 DEBUG nova.virt.libvirt.volume.mount [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  2 11:39:56 np0005542546 python3.9[189388]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:39:56 np0005542546 systemd[1]: Stopping nova_compute container...
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 2025-12-02 16:39:56.897 188547 INFO nova.virt.libvirt.host [None req-42c601a8-316e-4215-a77c-d93d15f45bf9 - - - - - -] Libvirt host capabilities <capabilities>
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 
Dec  2 11:39:56 np0005542546 nova_compute[188543]:  <host>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <uuid>e8b28829-c1bb-40ef-87e7-81771a26068f</uuid>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <cpu>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <arch>x86_64</arch>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <model>EPYC-Rome-v4</model>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <vendor>AMD</vendor>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <microcode version='16777317'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <signature family='23' model='49' stepping='0'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='x2apic'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='tsc-deadline'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='osxsave'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='hypervisor'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='tsc_adjust'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='spec-ctrl'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='stibp'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='arch-capabilities'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='ssbd'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='cmp_legacy'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='topoext'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='virt-ssbd'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='lbrv'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='tsc-scale'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='vmcb-clean'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='pause-filter'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='pfthreshold'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='svme-addr-chk'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='rdctl-no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='skip-l1dfl-vmentry'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='mds-no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <feature name='pschange-mc-no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <pages unit='KiB' size='4'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <pages unit='KiB' size='2048'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <pages unit='KiB' size='1048576'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </cpu>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <power_management>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <suspend_mem/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <suspend_disk/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <suspend_hybrid/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </power_management>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <iommu support='no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <migration_features>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <live/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <uri_transports>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:        <uri_transport>tcp</uri_transport>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:        <uri_transport>rdma</uri_transport>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      </uri_transports>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </migration_features>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <topology>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <cells num='1'>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:        <cell id='0'>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          <memory unit='KiB'>7864320</memory>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          <pages unit='KiB' size='2048'>0</pages>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          <distances>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <sibling id='0' value='10'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          </distances>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          <cpus num='8'>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:          </cpus>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:        </cell>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      </cells>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </topology>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <cache>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </cache>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <secmodel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <model>selinux</model>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <doi>0</doi>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </secmodel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <secmodel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <model>dac</model>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <doi>0</doi>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </secmodel>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:  </host>
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 
Dec  2 11:39:56 np0005542546 nova_compute[188543]:  <guest>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <os_type>hvm</os_type>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <arch name='i686'>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <wordsize>32</wordsize>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <domain type='qemu'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <domain type='kvm'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </arch>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <features>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <pae/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <nonpae/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <acpi default='on' toggle='yes'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <apic default='on' toggle='no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <cpuselection/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <deviceboot/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <disksnapshot default='on' toggle='no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <externalSnapshot/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </features>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:  </guest>
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 
Dec  2 11:39:56 np0005542546 nova_compute[188543]:  <guest>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <os_type>hvm</os_type>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <arch name='x86_64'>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <wordsize>64</wordsize>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <domain type='qemu'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <domain type='kvm'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </arch>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    <features>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <acpi default='on' toggle='yes'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <apic default='on' toggle='no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <cpuselection/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <deviceboot/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <disksnapshot default='on' toggle='no'/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:      <externalSnapshot/>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:    </features>
Dec  2 11:39:56 np0005542546 nova_compute[188543]:  </guest>
Dec  2 11:39:56 np0005542546 nova_compute[188543]: 
Dec  2 11:39:56 np0005542546 nova_compute[188543]: </capabilities>
Dec  2 11:39:56 np0005542546 virtqemud[189206]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  2 11:39:56 np0005542546 virtqemud[189206]: hostname: compute-0
Dec  2 11:39:56 np0005542546 virtqemud[189206]: End of file while reading data: Input/output error
Dec  2 11:39:56 np0005542546 systemd[1]: libpod-f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6.scope: Deactivated successfully.
Dec  2 11:39:56 np0005542546 systemd[1]: libpod-f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6.scope: Consumed 2.647s CPU time.
Dec  2 11:39:56 np0005542546 podman[189400]: 2025-12-02 16:39:56.93833513 +0000 UTC m=+0.072497863 container died f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Dec  2 11:39:56 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6-userdata-shm.mount: Deactivated successfully.
Dec  2 11:39:56 np0005542546 systemd[1]: var-lib-containers-storage-overlay-410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916-merged.mount: Deactivated successfully.
Dec  2 11:39:56 np0005542546 podman[189400]: 2025-12-02 16:39:56.991479275 +0000 UTC m=+0.125641998 container cleanup f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  2 11:39:56 np0005542546 podman[189400]: nova_compute
Dec  2 11:39:57 np0005542546 podman[189431]: nova_compute
Dec  2 11:39:57 np0005542546 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  2 11:39:57 np0005542546 systemd[1]: Stopped nova_compute container.
Dec  2 11:39:57 np0005542546 systemd[1]: Starting nova_compute container...
Dec  2 11:39:57 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:39:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e7b0bbe5a183a1cb2e1a04d39cec2b39149b168d267559e64d3e8a8df1916/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:57 np0005542546 podman[189444]: 2025-12-02 16:39:57.191853617 +0000 UTC m=+0.097036105 container init f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Dec  2 11:39:57 np0005542546 podman[189444]: 2025-12-02 16:39:57.200432875 +0000 UTC m=+0.105615343 container start f5a990658a4a5313eddabe278676313c6ec7840a1a588f97be1b39c371908da6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec  2 11:39:57 np0005542546 podman[189444]: nova_compute
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + sudo -E kolla_set_configs
Dec  2 11:39:57 np0005542546 systemd[1]: Started nova_compute container.
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Validating config file
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying service configuration files
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /etc/ceph
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Creating directory /etc/ceph
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /etc/ceph
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Writing out command to execute
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:57 np0005542546 nova_compute[189459]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 11:39:57 np0005542546 nova_compute[189459]: ++ cat /run_command
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + CMD=nova-compute
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + ARGS=
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + sudo kolla_copy_cacerts
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + [[ ! -n '' ]]
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + . kolla_extend_start
Dec  2 11:39:57 np0005542546 nova_compute[189459]: Running command: 'nova-compute'
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + echo 'Running command: '\''nova-compute'\'''
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + umask 0022
Dec  2 11:39:57 np0005542546 nova_compute[189459]: + exec nova-compute
Dec  2 11:39:57 np0005542546 python3.9[189622]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  2 11:39:58 np0005542546 systemd[1]: Started libpod-conmon-2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62.scope.
Dec  2 11:39:58 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:39:58 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb43973c342d075e03ec62df98c8aaae0b1c4c2554ba158c2fad9e8d4c6a04b/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:58 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb43973c342d075e03ec62df98c8aaae0b1c4c2554ba158c2fad9e8d4c6a04b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:58 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb43973c342d075e03ec62df98c8aaae0b1c4c2554ba158c2fad9e8d4c6a04b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  2 11:39:58 np0005542546 podman[189646]: 2025-12-02 16:39:58.196179504 +0000 UTC m=+0.121811462 container init 2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 11:39:58 np0005542546 podman[189646]: 2025-12-02 16:39:58.20288163 +0000 UTC m=+0.128513578 container start 2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  2 11:39:58 np0005542546 python3.9[189622]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Applying nova statedir ownership
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  2 11:39:58 np0005542546 nova_compute_init[189669]: INFO:nova_statedir:Nova statedir ownership complete
Dec  2 11:39:58 np0005542546 systemd[1]: libpod-2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62.scope: Deactivated successfully.
Dec  2 11:39:58 np0005542546 podman[189682]: 2025-12-02 16:39:58.310992631 +0000 UTC m=+0.029026417 container died 2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:39:58 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62-userdata-shm.mount: Deactivated successfully.
Dec  2 11:39:58 np0005542546 systemd[1]: var-lib-containers-storage-overlay-bcb43973c342d075e03ec62df98c8aaae0b1c4c2554ba158c2fad9e8d4c6a04b-merged.mount: Deactivated successfully.
Dec  2 11:39:58 np0005542546 podman[189682]: 2025-12-02 16:39:58.34017286 +0000 UTC m=+0.058206626 container cleanup 2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  2 11:39:58 np0005542546 systemd[1]: libpod-conmon-2c0018f438f368d9db7706ab064721957719deb758180c6f59e72c5c7f269a62.scope: Deactivated successfully.
Dec  2 11:39:58 np0005542546 systemd[1]: session-23.scope: Deactivated successfully.
Dec  2 11:39:58 np0005542546 systemd[1]: session-23.scope: Consumed 1min 54.124s CPU time.
Dec  2 11:39:58 np0005542546 systemd-logind[790]: Session 23 logged out. Waiting for processes to exit.
Dec  2 11:39:58 np0005542546 systemd-logind[790]: Removed session 23.
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.254 189463 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.254 189463 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.254 189463 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.254 189463 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.377 189463 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.400 189463 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 11:39:59 np0005542546 nova_compute[189459]: 2025-12-02 16:39:59.400 189463 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.086 189463 INFO nova.virt.driver [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.202 189463 INFO nova.compute.provider_config [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.282 189463 DEBUG oslo_concurrency.lockutils [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.282 189463 DEBUG oslo_concurrency.lockutils [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.283 189463 DEBUG oslo_concurrency.lockutils [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.283 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.283 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.283 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.284 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.284 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.284 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.284 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.285 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.285 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.285 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.285 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.285 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.286 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.286 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.286 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.286 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.286 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.287 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.287 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.287 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.287 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.287 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.288 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.288 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.288 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.288 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.288 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.289 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.289 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.289 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.289 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.289 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.290 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.290 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.290 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.290 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.290 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.291 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.291 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.291 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.291 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.291 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.292 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.292 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.292 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.292 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.292 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.293 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.293 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.293 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.293 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.293 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.294 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.295 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.295 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.295 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.295 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.295 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.295 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.296 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.296 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.296 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.296 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.296 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.297 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.297 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.297 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.297 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.298 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.298 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.298 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.298 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.298 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.299 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.299 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.299 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.300 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.300 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.300 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.300 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.300 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.301 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.301 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.301 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.301 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.301 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.301 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.302 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.303 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.303 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.303 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.303 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.303 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.303 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.304 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.304 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.304 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.304 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.304 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.304 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.305 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.306 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.307 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.308 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.309 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.309 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.309 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.309 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.309 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.310 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.310 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.310 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.310 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.310 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.311 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.311 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.311 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.311 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.311 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.312 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.312 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.312 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.312 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.312 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.313 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.313 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.313 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.313 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.314 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.314 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.314 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.314 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.314 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.315 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.315 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.315 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.315 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.316 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.316 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.316 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.316 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.316 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.317 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.317 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.317 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.317 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.318 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.318 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.318 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.318 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.318 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.319 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.319 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.319 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.319 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.319 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.320 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.320 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.320 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.320 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.321 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.321 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.321 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.321 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.321 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.322 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.322 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.322 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.322 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.322 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.323 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.323 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.323 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.323 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.323 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.323 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.324 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.324 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.324 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.324 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.324 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.324 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.325 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.326 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.327 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.327 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.327 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.327 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.327 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.328 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.328 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.328 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.328 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.328 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.328 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.329 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.329 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.329 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.329 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.329 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.329 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.330 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.330 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.330 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.330 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.330 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.331 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.331 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.331 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.331 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.331 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.331 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.332 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.332 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.332 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.332 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.332 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.332 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.333 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.333 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.333 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.333 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.333 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.333 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.334 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.335 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.336 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.337 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.338 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.338 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.338 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.338 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.338 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.338 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.339 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.340 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.341 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.342 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.343 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.344 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.344 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.344 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.344 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.344 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.344 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.345 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.345 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.345 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.345 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.345 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.345 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.346 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.346 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.346 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.346 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.347 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.347 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.347 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.347 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.347 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.347 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.348 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.348 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.348 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.348 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.348 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.348 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.349 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.350 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.350 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.350 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.350 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.351 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.351 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.351 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.351 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.351 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.352 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.352 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.352 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.352 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.352 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.353 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.353 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.353 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.353 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.353 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.354 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.355 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.355 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.355 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.355 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.355 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.355 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.356 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.357 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.358 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.358 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.358 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.358 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.358 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.359 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.359 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.359 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.359 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.359 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.359 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.360 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.360 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.360 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.360 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.360 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.360 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.361 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.362 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.363 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.363 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.363 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.363 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.363 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.363 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.364 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.365 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.365 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.365 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.365 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.365 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.365 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.366 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.367 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.368 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.368 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.368 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.368 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.368 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.368 189463 WARNING oslo_config.cfg [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  2 11:40:00 np0005542546 nova_compute[189459]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  2 11:40:00 np0005542546 nova_compute[189459]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  2 11:40:00 np0005542546 nova_compute[189459]: and ``live_migration_inbound_addr`` respectively.
Dec  2 11:40:00 np0005542546 nova_compute[189459]: ).  Its value may be silently ignored in the future.#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.369 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.369 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.369 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.369 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.369 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.370 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.371 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.371 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.371 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.371 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.371 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.371 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.372 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.373 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.373 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.373 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.373 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.373 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.374 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.374 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.374 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.374 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.374 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.375 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.375 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.375 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.375 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.375 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.375 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.376 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.376 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.376 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.376 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.376 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.377 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.377 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.377 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.377 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.377 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.378 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.379 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.379 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.379 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.379 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.379 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.379 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.380 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.380 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.380 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.380 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.380 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.380 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.381 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.381 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.381 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.381 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.381 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.382 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.382 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.382 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.382 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.382 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.383 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.383 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.383 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.383 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.383 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.384 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.385 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.385 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.385 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.385 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.385 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.385 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.386 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.386 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.386 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.386 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.386 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.386 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.387 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.387 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.387 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.387 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.387 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.387 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.388 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.389 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.389 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.389 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.389 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.389 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.390 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.390 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.390 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.390 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.390 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.391 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.391 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.391 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.391 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.391 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.392 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.392 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.392 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.392 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.392 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.392 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.393 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.393 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.393 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.394 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.394 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.394 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.394 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.394 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.394 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.395 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.395 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.395 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.395 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.395 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.396 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.396 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.396 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.396 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.396 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.397 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.397 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.397 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.397 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.397 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.398 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.398 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.398 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.398 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.398 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.398 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.399 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.399 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.399 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.399 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.399 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.400 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.400 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.400 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.400 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.400 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.401 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.401 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.401 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.401 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.401 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.402 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.402 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.402 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.402 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.402 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.403 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.403 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.403 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.403 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.403 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.403 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.404 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.404 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.404 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.404 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.405 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.405 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.405 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.405 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.405 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.406 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.406 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.406 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.406 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.406 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.407 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.407 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.407 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.407 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.407 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.408 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.408 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.408 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.408 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.408 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.409 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.409 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.409 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.409 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.409 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.410 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.410 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.410 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.410 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.410 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.411 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.411 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.411 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.411 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.411 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.412 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.412 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.412 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.412 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.412 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.412 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.413 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.414 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.414 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.414 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.414 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.414 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.415 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.415 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.415 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.415 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.415 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.416 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.416 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.416 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.416 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.416 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.417 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.417 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.417 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.417 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.417 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.418 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.418 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.418 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.418 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.418 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.418 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.419 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.420 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.420 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.420 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.420 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.420 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.421 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.421 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.421 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.421 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.421 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.422 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.422 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.422 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.422 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.422 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.422 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.423 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.423 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.423 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.423 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.423 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.424 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.425 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.425 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.425 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.425 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.425 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.425 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.426 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.427 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.427 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.427 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.427 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.427 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.428 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.428 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.428 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.428 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.428 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.429 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.429 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.429 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.429 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.429 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.429 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.430 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.430 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.430 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.430 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.430 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.431 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.431 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.431 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.431 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.431 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.431 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.432 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.432 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.432 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.432 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.432 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.433 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.433 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.433 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.433 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.433 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.434 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.434 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.434 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.434 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.434 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.435 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.435 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.435 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.435 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.435 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.436 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.436 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.436 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.436 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.436 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.437 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.437 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.437 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.437 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.438 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.438 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.438 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.438 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.438 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.439 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.439 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.439 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.439 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.439 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.440 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.440 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.440 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.440 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.440 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.441 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.441 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.441 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.441 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.441 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.442 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.442 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.442 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.442 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.442 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.443 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.443 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.443 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.443 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.443 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.444 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.444 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.444 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.444 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.444 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.444 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.445 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.445 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.445 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.445 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.445 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.446 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.446 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.446 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.446 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.446 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.447 189463 DEBUG oslo_service.service [None req-0b3f1a3c-1b90-4ce6-8cc7-b7cee48926c3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.448 189463 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.870 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.871 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.871 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.872 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.891 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0c957a12b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.894 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0c957a12b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.896 189463 INFO nova.virt.libvirt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.902 189463 INFO nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Libvirt host capabilities <capabilities>
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <host>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <uuid>e8b28829-c1bb-40ef-87e7-81771a26068f</uuid>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <cpu>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <arch>x86_64</arch>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model>EPYC-Rome-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <vendor>AMD</vendor>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <microcode version='16777317'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <signature family='23' model='49' stepping='0'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='x2apic'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='tsc-deadline'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='osxsave'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='hypervisor'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='tsc_adjust'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='spec-ctrl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='stibp'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='arch-capabilities'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='ssbd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='cmp_legacy'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='topoext'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='virt-ssbd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='lbrv'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='tsc-scale'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='vmcb-clean'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='pause-filter'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='pfthreshold'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='svme-addr-chk'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='rdctl-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='skip-l1dfl-vmentry'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='mds-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature name='pschange-mc-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <pages unit='KiB' size='4'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <pages unit='KiB' size='2048'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <pages unit='KiB' size='1048576'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </cpu>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <power_management>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <suspend_mem/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <suspend_disk/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <suspend_hybrid/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </power_management>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <iommu support='no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <migration_features>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <live/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <uri_transports>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <uri_transport>tcp</uri_transport>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <uri_transport>rdma</uri_transport>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </uri_transports>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </migration_features>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <topology>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <cells num='1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <cell id='0'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          <memory unit='KiB'>7864320</memory>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          <pages unit='KiB' size='2048'>0</pages>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          <distances>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <sibling id='0' value='10'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          </distances>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          <cpus num='8'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:          </cpus>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        </cell>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </cells>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </topology>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <cache>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </cache>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <secmodel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model>selinux</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <doi>0</doi>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </secmodel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <secmodel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model>dac</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <doi>0</doi>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </secmodel>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  </host>
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <guest>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <os_type>hvm</os_type>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <arch name='i686'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <wordsize>32</wordsize>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <domain type='qemu'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <domain type='kvm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </arch>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <features>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <pae/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <nonpae/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <acpi default='on' toggle='yes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <apic default='on' toggle='no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <cpuselection/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <deviceboot/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <disksnapshot default='on' toggle='no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <externalSnapshot/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </features>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  </guest>
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <guest>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <os_type>hvm</os_type>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <arch name='x86_64'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <wordsize>64</wordsize>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <domain type='qemu'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <domain type='kvm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </arch>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <features>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <acpi default='on' toggle='yes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <apic default='on' toggle='no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <cpuselection/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <deviceboot/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <disksnapshot default='on' toggle='no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <externalSnapshot/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </features>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  </guest>
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 
Dec  2 11:40:00 np0005542546 nova_compute[189459]: </capabilities>
Dec  2 11:40:00 np0005542546 nova_compute[189459]: #033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.914 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  2 11:40:00 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.941 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  2 11:40:00 np0005542546 nova_compute[189459]: <domainCapabilities>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <domain>kvm</domain>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <arch>i686</arch>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <vcpu max='4096'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <iothreads supported='yes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <os supported='yes'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <enum name='firmware'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <loader supported='yes'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>rom</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>pflash</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <enum name='readonly'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>yes</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <enum name='secure'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </loader>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  </os>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:  <cpu>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <mode name='host-passthrough' supported='yes'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <enum name='hostPassthroughMigratable'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <mode name='maximum' supported='yes'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <enum name='maximumMigratable'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <mode name='host-model' supported='yes'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <vendor>AMD</vendor>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='x2apic'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='hypervisor'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='stibp'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='ssbd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='overflow-recov'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='succor'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='ibrs'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='lbrv'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-scale'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='flushbyasid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='pause-filter'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='pfthreshold'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <feature policy='disable' name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:    <mode name='custom' supported='yes'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-IBRS'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v4'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Denverton'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Dhyana-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v4'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx10'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx10-128'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx10-256'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx10-512'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-IBRS'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v4'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v1'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v2'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v3'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v4'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v5'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v6'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v7'>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:00 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <memoryBacking supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='sourceType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>anonymous</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>memfd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </memoryBacking>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <disk supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='diskDevice'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>disk</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cdrom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>floppy</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>lun</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>fdc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>sata</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </disk>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <graphics supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vnc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egl-headless</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </graphics>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <video supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='modelType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vga</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cirrus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>none</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>bochs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ramfb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </video>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hostdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='mode'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>subsystem</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='startupPolicy'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>mandatory</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>requisite</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>optional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='subsysType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pci</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='capsType'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='pciBackend'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hostdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <rng supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>random</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </rng>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <filesystem supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='driverType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>path</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>handle</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtiofs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </filesystem>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <tpm supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-tis</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-crb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emulator</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>external</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendVersion'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>2.0</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </tpm>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <redirdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </redirdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <channel supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </channel>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <crypto supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </crypto>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <interface supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>passt</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </interface>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <panic supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>isa</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>hyperv</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </panic>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <console supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>null</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dev</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pipe</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stdio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>udp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tcp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu-vdagent</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </console>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <gic supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <vmcoreinfo supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <genid supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backingStoreInput supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backup supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <async-teardown supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <ps2 supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sev supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sgx supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hyperv supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='features'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>relaxed</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vapic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>spinlocks</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vpindex</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>runtime</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>synic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stimer</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reset</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vendor_id</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>frequencies</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reenlightenment</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tlbflush</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ipi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>avic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emsr_bitmap</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>xmm_input</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <spinlocks>4095</spinlocks>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <stimer_direct>on</stimer_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hyperv>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <launchSecurity supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='sectype'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tdx</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </launchSecurity>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: </domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:00.950 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  2 11:40:01 np0005542546 nova_compute[189459]: <domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <domain>kvm</domain>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <arch>i686</arch>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <vcpu max='240'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <iothreads supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <os supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='firmware'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <loader supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>rom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pflash</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='readonly'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>yes</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='secure'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </loader>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </os>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='host-passthrough' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='hostPassthroughMigratable'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='maximum' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='maximumMigratable'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='host-model' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <vendor>AMD</vendor>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='x2apic'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='hypervisor'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='stibp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='overflow-recov'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='succor'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='lbrv'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-scale'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='flushbyasid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='pause-filter'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='pfthreshold'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='disable' name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='custom' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Dhyana-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-128'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-256'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-512'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v6'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v7'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <memoryBacking supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='sourceType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>anonymous</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>memfd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </memoryBacking>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <disk supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='diskDevice'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>disk</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cdrom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>floppy</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>lun</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ide</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>fdc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>sata</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </disk>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <graphics supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vnc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egl-headless</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </graphics>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <video supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='modelType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vga</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cirrus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>none</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>bochs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ramfb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </video>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hostdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='mode'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>subsystem</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='startupPolicy'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>mandatory</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>requisite</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>optional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='subsysType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pci</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='capsType'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='pciBackend'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hostdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <rng supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>random</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </rng>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <filesystem supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='driverType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>path</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>handle</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtiofs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </filesystem>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <tpm supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-tis</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-crb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emulator</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>external</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendVersion'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>2.0</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </tpm>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <redirdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </redirdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <channel supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </channel>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <crypto supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </crypto>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <interface supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>passt</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </interface>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <panic supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>isa</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>hyperv</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </panic>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <console supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>null</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dev</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pipe</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stdio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>udp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tcp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu-vdagent</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </console>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <gic supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <vmcoreinfo supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <genid supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backingStoreInput supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backup supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <async-teardown supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <ps2 supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sev supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sgx supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hyperv supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='features'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>relaxed</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vapic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>spinlocks</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vpindex</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>runtime</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>synic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stimer</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reset</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vendor_id</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>frequencies</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reenlightenment</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tlbflush</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ipi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>avic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emsr_bitmap</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>xmm_input</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <spinlocks>4095</spinlocks>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <stimer_direct>on</stimer_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hyperv>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <launchSecurity supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='sectype'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tdx</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </launchSecurity>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: </domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.000 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
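The domainCapabilities XML dumped above lists, under `<mode name='custom'>`, each named CPU model with a `usable` attribute, and for unusable models a `<blockers>` element naming the host-missing features. A minimal sketch (not Nova's own code; the sample XML below is abbreviated from the log, and the function name is illustrative) of extracting that usability report with the standard library:

```python
# Parse the <mode name='custom'> section of libvirt's domainCapabilities XML
# to map each named CPU model to its usability and any blocking features.
import xml.etree.ElementTree as ET

# Abbreviated sample of the XML logged above.
SAMPLE = """\
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
      <model usable='no' vendor='Intel'>Snowridge-v4</model>
      <blockers model='Snowridge-v4'>
        <feature name='cldemote'/>
        <feature name='movdir64b'/>
      </blockers>
    </mode>
  </cpu>
</domainCapabilities>
"""

def cpu_model_report(caps_xml: str) -> dict:
    """Return {model name: {'usable': bool, 'blockers': [feature, ...]}}."""
    root = ET.fromstring(caps_xml)
    custom = root.find("./cpu/mode[@name='custom']")
    report = {}
    for model in custom.findall('model'):
        report[model.text] = {'usable': model.get('usable') == 'yes',
                              'blockers': []}
    # <blockers> elements are siblings of <model>, keyed by the model attribute.
    for blk in custom.findall('blockers'):
        report[blk.get('model')]['blockers'] = [
            f.get('name') for f in blk.findall('feature')]
    return report

if __name__ == '__main__':
    for name, info in cpu_model_report(SAMPLE).items():
        print(name, info)
```

Fed the full dump from the log, the same function would show, for example, that `Snowridge-v4` is blocked on this EPYC-Rome host by `cldemote`, `erms`, `gfni`, `movdir64b`, `movdiri`, and `xsaves`, which is how Nova decides which `cpu_models` values are valid for this compute node.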
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.005 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  2 11:40:01 np0005542546 nova_compute[189459]: <domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <domain>kvm</domain>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <arch>x86_64</arch>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <vcpu max='4096'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <iothreads supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <os supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='firmware'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>efi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <loader supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>rom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pflash</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='readonly'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>yes</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='secure'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>yes</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </loader>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </os>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='host-passthrough' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='hostPassthroughMigratable'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='maximum' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='maximumMigratable'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='host-model' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <vendor>AMD</vendor>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='x2apic'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='hypervisor'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='stibp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='overflow-recov'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='succor'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='lbrv'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-scale'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='flushbyasid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='pause-filter'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='pfthreshold'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='disable' name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='custom' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Dhyana-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-128'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-256'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-512'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v6'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v7'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <memoryBacking supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='sourceType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>anonymous</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>memfd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </memoryBacking>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <disk supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='diskDevice'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>disk</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cdrom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>floppy</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>lun</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>fdc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>sata</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </disk>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <graphics supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vnc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egl-headless</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </graphics>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <video supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='modelType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vga</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cirrus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>none</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>bochs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ramfb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </video>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hostdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='mode'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>subsystem</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='startupPolicy'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>mandatory</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>requisite</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>optional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='subsysType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pci</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='capsType'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='pciBackend'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hostdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <rng supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>random</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </rng>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <filesystem supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='driverType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>path</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>handle</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtiofs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </filesystem>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <tpm supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-tis</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-crb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emulator</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>external</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendVersion'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>2.0</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </tpm>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <redirdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </redirdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <channel supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </channel>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <crypto supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </crypto>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <interface supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>passt</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </interface>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <panic supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>isa</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>hyperv</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </panic>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <console supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>null</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dev</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pipe</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stdio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>udp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tcp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu-vdagent</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </console>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <gic supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <vmcoreinfo supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <genid supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backingStoreInput supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backup supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <async-teardown supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <ps2 supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sev supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sgx supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hyperv supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='features'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>relaxed</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vapic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>spinlocks</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vpindex</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>runtime</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>synic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stimer</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reset</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vendor_id</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>frequencies</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reenlightenment</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tlbflush</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ipi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>avic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emsr_bitmap</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>xmm_input</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <spinlocks>4095</spinlocks>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <stimer_direct>on</stimer_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hyperv>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <launchSecurity supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='sectype'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tdx</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </launchSecurity>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: </domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.073 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  2 11:40:01 np0005542546 nova_compute[189459]: <domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <domain>kvm</domain>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <arch>x86_64</arch>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <vcpu max='240'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <iothreads supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <os supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='firmware'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <loader supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>rom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pflash</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='readonly'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>yes</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='secure'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>no</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </loader>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </os>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='host-passthrough' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='hostPassthroughMigratable'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='maximum' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='maximumMigratable'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>on</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>off</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='host-model' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <vendor>AMD</vendor>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='x2apic'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='hypervisor'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='stibp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='overflow-recov'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='succor'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='lbrv'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='tsc-scale'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='flushbyasid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='pause-filter'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='pfthreshold'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <feature policy='disable' name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <mode name='custom' supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Broadwell-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Cooperlake-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Denverton-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Dhyana-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='auto-ibrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Milan-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amd-psfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='no-nested-data-bp'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='null-sel-clr-base'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='stibp-always-on'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-Rome-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='EPYC-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='GraniteRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-128'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-256'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx10-512'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='prefetchiti'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Haswell-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v6'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Icelake-Server-v7'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='IvyBridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='KnightsMill-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4fmaps'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-4vnniw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512er'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512pf'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G4-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Opteron_G5-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fma4'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tbm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xop'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SapphireRapids-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='amx-tile'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-bf16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-fp16'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512-vpopcntdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bitalg'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vbmi2'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrc'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fzrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='la57'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='taa-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='tsx-ldtrk'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xfd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='SierraForest-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ifma'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-ne-convert'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx-vnni-int8'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='bus-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cmpccxadd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fbsdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='fsrs'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ibrs-all'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mcdt-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pbrsb-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='psdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='sbdr-ssdp-no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='serialize'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vaes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='vpclmulqdq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Client-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='hle'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='rtm'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Skylake-Server-v5'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512bw'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512cd'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512dq'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512f'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='avx512vl'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='invpcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pcid'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='pku'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='mpx'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v2'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v3'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='core-capability'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='split-lock-detect'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='Snowridge-v4'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='cldemote'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='erms'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='gfni'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdir64b'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='movdiri'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='xsaves'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='athlon-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='core2duo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='coreduo-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='n270-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='ss'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <blockers model='phenom-v1'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnow'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <feature name='3dnowext'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </blockers>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </mode>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </cpu>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <memoryBacking supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <enum name='sourceType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>anonymous</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <value>memfd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </memoryBacking>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <disk supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='diskDevice'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>disk</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cdrom</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>floppy</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>lun</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ide</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>fdc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>sata</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </disk>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <graphics supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vnc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egl-headless</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </graphics>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <video supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='modelType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vga</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>cirrus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>none</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>bochs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ramfb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </video>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hostdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='mode'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>subsystem</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='startupPolicy'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>mandatory</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>requisite</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>optional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='subsysType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pci</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>scsi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='capsType'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='pciBackend'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hostdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <rng supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtio-non-transitional</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>random</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>egd</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </rng>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <filesystem supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='driverType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>path</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>handle</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>virtiofs</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </filesystem>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <tpm supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-tis</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tpm-crb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emulator</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>external</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendVersion'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>2.0</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </tpm>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <redirdev supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='bus'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>usb</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </redirdev>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <channel supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </channel>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <crypto supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendModel'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>builtin</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </crypto>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <interface supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='backendType'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>default</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>passt</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </interface>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <panic supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='model'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>isa</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>hyperv</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </panic>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <console supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='type'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>null</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vc</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pty</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dev</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>file</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>pipe</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stdio</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>udp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tcp</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>unix</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>qemu-vdagent</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>dbus</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </console>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </devices>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  <features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <gic supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <vmcoreinfo supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <genid supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backingStoreInput supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <backup supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <async-teardown supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <ps2 supported='yes'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sev supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <sgx supported='no'/>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <hyperv supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='features'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>relaxed</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vapic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>spinlocks</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vpindex</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>runtime</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>synic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>stimer</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reset</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>vendor_id</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>frequencies</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>reenlightenment</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tlbflush</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>ipi</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>avic</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>emsr_bitmap</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>xmm_input</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <spinlocks>4095</spinlocks>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <stimer_direct>on</stimer_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </defaults>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </hyperv>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    <launchSecurity supported='yes'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      <enum name='sectype'>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:        <value>tdx</value>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:      </enum>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:    </launchSecurity>
Dec  2 11:40:01 np0005542546 nova_compute[189459]:  </features>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: </domainCapabilities>
Dec  2 11:40:01 np0005542546 nova_compute[189459]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.134 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.134 189463 INFO nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Secure Boot support detected#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.137 189463 INFO nova.virt.libvirt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.147 189463 DEBUG nova.virt.libvirt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.560 189463 WARNING nova.virt.libvirt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.561 189463 DEBUG nova.virt.libvirt.volume.mount [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.609 189463 INFO nova.virt.node [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Determined node identity 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from /var/lib/nova/compute_id#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.669 189463 WARNING nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Compute nodes ['9fd1b4c0-b7de-4b88-8041-4e819fca48c5'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  2 11:40:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:40:01.840 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:40:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:40:01.841 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:40:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:40:01.841 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:40:01 np0005542546 nova_compute[189459]: 2025-12-02 16:40:01.926 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.110 189463 WARNING nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.110 189463 DEBUG oslo_concurrency.lockutils [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.110 189463 DEBUG oslo_concurrency.lockutils [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.110 189463 DEBUG oslo_concurrency.lockutils [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.111 189463 DEBUG nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 11:40:02 np0005542546 systemd[1]: Starting libvirt nodedev daemon...
Dec  2 11:40:02 np0005542546 podman[189757]: 2025-12-02 16:40:02.216919464 +0000 UTC m=+0.059085540 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 11:40:02 np0005542546 systemd[1]: Started libvirt nodedev daemon.
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.470 189463 WARNING nova.virt.libvirt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.471 189463 DEBUG nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6044MB free_disk=72.42776107788086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.471 189463 DEBUG oslo_concurrency.lockutils [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.472 189463 DEBUG oslo_concurrency.lockutils [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.546 189463 WARNING nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] No compute node record for compute-0.ctlplane.example.com:9fd1b4c0-b7de-4b88-8041-4e819fca48c5: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 could not be found.#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.717 189463 INFO nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.788 189463 DEBUG nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 11:40:02 np0005542546 nova_compute[189459]: 2025-12-02 16:40:02.788 189463 DEBUG nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 11:40:03 np0005542546 nova_compute[189459]: 2025-12-02 16:40:03.787 189463 INFO nova.scheduler.client.report [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [req-92e0fb40-d9d9-4045-9d6c-ceab2971f93d] Created resource provider record via placement API for resource provider with UUID 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 and name compute-0.ctlplane.example.com.#033[00m
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.394 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  2 11:40:04 np0005542546 nova_compute[189459]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.394 189463 INFO nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.396 189463 DEBUG nova.compute.provider_tree [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.396 189463 DEBUG nova.virt.libvirt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 11:40:04 np0005542546 systemd-logind[790]: New session 25 of user zuul.
Dec  2 11:40:04 np0005542546 systemd[1]: Started Session 25 of User zuul.
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.496 189463 DEBUG nova.scheduler.client.report [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Updated inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.497 189463 DEBUG nova.compute.provider_tree [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Updating resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.497 189463 DEBUG nova.compute.provider_tree [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.591 189463 DEBUG nova.compute.provider_tree [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Updating resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.611 189463 DEBUG nova.compute.resource_tracker [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.612 189463 DEBUG oslo_concurrency.lockutils [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.612 189463 DEBUG nova.service [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.724 189463 DEBUG nova.service [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec  2 11:40:04 np0005542546 nova_compute[189459]: 2025-12-02 16:40:04.725 189463 DEBUG nova.servicegroup.drivers.db [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec  2 11:40:05 np0005542546 python3.9[189953]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:40:06 np0005542546 podman[190011]: 2025-12-02 16:40:06.247242872 +0000 UTC m=+0.069476019 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 11:40:07 np0005542546 python3.9[190129]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:40:07 np0005542546 systemd[1]: Reloading.
Dec  2 11:40:07 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:40:07 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:40:08 np0005542546 python3.9[190314]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:40:08 np0005542546 network[190331]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:40:08 np0005542546 network[190332]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:40:08 np0005542546 network[190333]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:40:11 np0005542546 podman[190419]: 2025-12-02 16:40:11.896597369 +0000 UTC m=+0.162616365 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  2 11:40:13 np0005542546 python3.9[190636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:40:14 np0005542546 python3.9[190789]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:14 np0005542546 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 11:40:15 np0005542546 python3.9[190942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:15 np0005542546 python3.9[191094]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:40:16 np0005542546 python3.9[191246]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:40:17 np0005542546 python3.9[191398]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:40:17 np0005542546 systemd[1]: Reloading.
Dec  2 11:40:17 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:40:17 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:40:18 np0005542546 python3.9[191585]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:40:19 np0005542546 python3.9[191738]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:40:20 np0005542546 python3.9[191888]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:40:21 np0005542546 python3.9[192040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:21 np0005542546 python3.9[192161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693620.5829432-133-99839357448554/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:40:22 np0005542546 python3.9[192313]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  2 11:40:23 np0005542546 python3.9[192465]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  2 11:40:23 np0005542546 nova_compute[189459]: 2025-12-02 16:40:23.727 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 11:40:23 np0005542546 nova_compute[189459]: 2025-12-02 16:40:23.749 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 11:40:24 np0005542546 python3.9[192618]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 11:40:25 np0005542546 python3.9[192776]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 11:40:26 np0005542546 python3.9[192934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:26 np0005542546 python3.9[193055]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693625.806704-201-99061745236095/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:27 np0005542546 python3.9[193205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:27 np0005542546 python3.9[193326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693626.9211383-201-123010336570670/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:28 np0005542546 python3.9[193476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:29 np0005542546 python3.9[193597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693628.0473697-201-264932410094050/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:29 np0005542546 python3.9[193747]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:40:30 np0005542546 python3.9[193899]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:40:30 np0005542546 python3.9[194051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:31 np0005542546 python3.9[194172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693630.4290764-260-179881247154968/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:31 np0005542546 python3.9[194322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:32 np0005542546 python3.9[194398]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:32 np0005542546 podman[194399]: 2025-12-02 16:40:32.480408763 +0000 UTC m=+0.074818726 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 11:40:33 np0005542546 python3.9[194569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:33 np0005542546 python3.9[194690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693632.5466878-260-88201480242534/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:34 np0005542546 python3.9[194840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:34 np0005542546 python3.9[194961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693633.7001102-260-116354029296406/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:35 np0005542546 python3.9[195111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:35 np0005542546 python3.9[195232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693634.8296978-260-214550552599814/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:36 np0005542546 podman[195356]: 2025-12-02 16:40:36.346387655 +0000 UTC m=+0.053244584 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 11:40:36 np0005542546 python3.9[195400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:37 np0005542546 python3.9[195523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693636.0560498-260-54598728020750/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:37 np0005542546 python3.9[195673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:38 np0005542546 python3.9[195794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693637.1967483-260-107135028049605/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:38 np0005542546 python3.9[195944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:39 np0005542546 python3.9[196065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693638.430427-260-17568103578706/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:40 np0005542546 python3.9[196215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:40 np0005542546 auditd[701]: Audit daemon rotating log files
Dec  2 11:40:40 np0005542546 python3.9[196336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693639.6060357-260-101132394016216/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:41 np0005542546 python3.9[196486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:41 np0005542546 python3.9[196607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693640.7362647-260-245611133998498/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:42 np0005542546 podman[196731]: 2025-12-02 16:40:42.250690858 +0000 UTC m=+0.097797423 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  2 11:40:42 np0005542546 python3.9[196776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:42 np0005542546 python3.9[196904]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693641.8826385-260-120949277193856/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:43 np0005542546 python3.9[197054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:44 np0005542546 python3.9[197130]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:44 np0005542546 python3.9[197280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:45 np0005542546 python3.9[197356]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:45 np0005542546 python3.9[197506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:46 np0005542546 python3.9[197582]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:47 np0005542546 python3.9[197734]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:47 np0005542546 python3.9[197886]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:48 np0005542546 python3.9[198038]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:40:49 np0005542546 python3.9[198190]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:40:49 np0005542546 systemd[1]: Reloading.
Dec  2 11:40:49 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:40:49 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:40:49 np0005542546 systemd[1]: Listening on Podman API Socket.
Dec  2 11:40:50 np0005542546 python3.9[198381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:50 np0005542546 python3.9[198504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693649.971459-482-38337272364914/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:40:51 np0005542546 python3.9[198580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:40:51 np0005542546 python3.9[198703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693649.971459-482-38337272364914/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:40:52 np0005542546 python3.9[198855]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  2 11:40:53 np0005542546 python3.9[199007]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:40:54 np0005542546 python3[199159]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:40:55 np0005542546 podman[199195]: 2025-12-02 16:40:55.026905592 +0000 UTC m=+0.055089283 container create 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  2 11:40:55 np0005542546 podman[199195]: 2025-12-02 16:40:55.000215088 +0000 UTC m=+0.028398779 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  2 11:40:55 np0005542546 python3[199159]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec  2 11:40:55 np0005542546 python3.9[199385]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:40:56 np0005542546 python3.9[199539]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:57 np0005542546 python3.9[199690]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693656.7056768-546-161978675062710/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:40:58 np0005542546 python3.9[199766]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:40:58 np0005542546 systemd[1]: Reloading.
Dec  2 11:40:58 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:40:58 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.413 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.414 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.414 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.428 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.429 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.429 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.429 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.430 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:40:59 np0005542546 python3.9[199878]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.457 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.458 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.458 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.459 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 11:40:59 np0005542546 systemd[1]: Reloading.
Dec  2 11:40:59 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:40:59 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.652 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.653 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6036MB free_disk=72.43152618408203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.653 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.654 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.714 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.715 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.741 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.753 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.754 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 11:40:59 np0005542546 nova_compute[189459]: 2025-12-02 16:40:59.755 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:40:59 np0005542546 systemd[1]: Starting ceilometer_agent_compute container...
Dec  2 11:40:59 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:40:59 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:40:59 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:40:59 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 11:40:59 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 11:40:59 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.
Dec  2 11:40:59 np0005542546 podman[199918]: 2025-12-02 16:40:59.980580935 +0000 UTC m=+0.146003276 container init 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  2 11:40:59 np0005542546 ceilometer_agent_compute[199934]: + sudo -E kolla_set_configs
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: sudo: unable to send audit message: Operation not permitted
Dec  2 11:41:00 np0005542546 podman[199918]: 2025-12-02 16:41:00.017111495 +0000 UTC m=+0.182533766 container start 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 11:41:00 np0005542546 podman[199918]: ceilometer_agent_compute
Dec  2 11:41:00 np0005542546 systemd[1]: Started ceilometer_agent_compute container.
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Validating config file
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Copying service configuration files
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: INFO:__main__:Writing out command to execute
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: ++ cat /run_command
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + ARGS=
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + sudo kolla_copy_cacerts
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: sudo: unable to send audit message: Operation not permitted
Dec  2 11:41:00 np0005542546 podman[199941]: 2025-12-02 16:41:00.119553397 +0000 UTC m=+0.055950825 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + [[ ! -n '' ]]
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + . kolla_extend_start
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + umask 0022
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  2 11:41:00 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-335a1dd395eefffd.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:41:00 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-335a1dd395eefffd.service: Failed with result 'exit-code'.
Dec  2 11:41:00 np0005542546 python3.9[200118]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.940 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.940 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.941 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.942 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.943 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.944 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.945 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.946 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.947 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.948 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.949 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.957 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.983 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.984 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.984 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.984 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.984 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.984 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.984 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.985 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.986 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.987 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.988 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.989 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.990 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.991 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.992 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.993 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.994 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.995 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.996 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 11:41:00 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:00.997 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.000 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.002 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.003 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.016 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.118 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.118 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.118 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.200 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.209 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.210 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.210 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.325 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.326 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.327 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.328 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.329 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.330 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.331 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.332 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.333 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.334 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.335 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.336 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.337 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.338 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.339 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[199934]: 2025-12-02 16:41:01.351 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  2 11:41:01 np0005542546 virtqemud[189206]: End of file while reading data: Input/output error
Dec  2 11:41:01 np0005542546 systemd[1]: libpod-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope: Deactivated successfully.
Dec  2 11:41:01 np0005542546 podman[200122]: 2025-12-02 16:41:01.550130901 +0000 UTC m=+0.584411821 container died 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0)
Dec  2 11:41:01 np0005542546 systemd[1]: libpod-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope: Consumed 1.565s CPU time.
Dec  2 11:41:01 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-335a1dd395eefffd.timer: Deactivated successfully.
Dec  2 11:41:01 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.
Dec  2 11:41:01 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-userdata-shm.mount: Deactivated successfully.
Dec  2 11:41:01 np0005542546 systemd[1]: var-lib-containers-storage-overlay-18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7-merged.mount: Deactivated successfully.
Dec  2 11:41:01 np0005542546 podman[200122]: 2025-12-02 16:41:01.597691607 +0000 UTC m=+0.631972497 container cleanup 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 11:41:01 np0005542546 podman[200122]: ceilometer_agent_compute
Dec  2 11:41:01 np0005542546 podman[200161]: ceilometer_agent_compute
Dec  2 11:41:01 np0005542546 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  2 11:41:01 np0005542546 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  2 11:41:01 np0005542546 systemd[1]: Starting ceilometer_agent_compute container...
Dec  2 11:41:01 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:41:01 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:01 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:01 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:01 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a64a5d4496da2b0248d60ad027e350c66b8322543c1f535fba113a3940f7d7/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:01 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.
Dec  2 11:41:01 np0005542546 podman[200174]: 2025-12-02 16:41:01.84018388 +0000 UTC m=+0.134249771 container init 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  2 11:41:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:41:01.841 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:41:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:41:01.843 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:41:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:41:01.843 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + sudo -E kolla_set_configs
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: sudo: unable to send audit message: Operation not permitted
Dec  2 11:41:01 np0005542546 podman[200174]: 2025-12-02 16:41:01.867259633 +0000 UTC m=+0.161325524 container start 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
container_name=ceilometer_agent_compute)
Dec  2 11:41:01 np0005542546 podman[200174]: ceilometer_agent_compute
Dec  2 11:41:01 np0005542546 systemd[1]: Started ceilometer_agent_compute container.
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Validating config file
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Copying service configuration files
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: INFO:__main__:Writing out command to execute
Dec  2 11:41:01 np0005542546 podman[200196]: 2025-12-02 16:41:01.930252181 +0000 UTC m=+0.054608071 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_managed=true, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: ++ cat /run_command
Dec  2 11:41:01 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:41:01 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Failed with result 'exit-code'.
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + ARGS=
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + sudo kolla_copy_cacerts
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: sudo: unable to send audit message: Operation not permitted
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + [[ ! -n '' ]]
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + . kolla_extend_start
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + umask 0022
Dec  2 11:41:01 np0005542546 ceilometer_agent_compute[200189]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  2 11:41:02 np0005542546 python3.9[200372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.772 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.772 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.772 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.772 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.773 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.774 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.775 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.777 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.778 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.779 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.780 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.781 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.783 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.784 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.785 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.786 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.788 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.788 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.788 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.788 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.810 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.810 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.810 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.810 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.810 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.811 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.812 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.813 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.814 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.815 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.816 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.817 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.818 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.819 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.820 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.821 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.823 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.825 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.826 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.844 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.869 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.869 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 11:41:02 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:02.870 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 11:41:02 np0005542546 podman[200455]: 2025-12-02 16:41:02.935234602 +0000 UTC m=+0.059166059 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.009 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.010 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.011 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.012 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.013 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.014 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.015 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.016 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.017 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.020 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.026 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.026 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.026 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.026 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.026 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.029 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.044 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.044 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.044 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.045 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.049 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:41:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:41:03 np0005542546 python3.9[200525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693662.1032994-578-173226399586457/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:41:03 np0005542546 python3.9[200682]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  2 11:41:04 np0005542546 python3.9[200834]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:41:05 np0005542546 python3[200986]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:41:05 np0005542546 podman[201020]: 2025-12-02 16:41:05.738970925 +0000 UTC m=+0.059239661 container create 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm)
Dec  2 11:41:05 np0005542546 podman[201020]: 2025-12-02 16:41:05.714711105 +0000 UTC m=+0.034979861 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  2 11:41:05 np0005542546 python3[200986]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  2 11:41:06 np0005542546 python3.9[201210]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:41:07 np0005542546 podman[201336]: 2025-12-02 16:41:07.079269601 +0000 UTC m=+0.062694601 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  2 11:41:07 np0005542546 python3.9[201375]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:41:08 np0005542546 python3.9[201532]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693667.3060606-631-120924670057424/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:41:08 np0005542546 python3.9[201608]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:41:08 np0005542546 systemd[1]: Reloading.
Dec  2 11:41:08 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:41:08 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:41:09 np0005542546 python3.9[201719]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:41:09 np0005542546 systemd[1]: Reloading.
Dec  2 11:41:09 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:41:09 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:41:09 np0005542546 systemd[1]: Starting node_exporter container...
Dec  2 11:41:09 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:41:09 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543411b972c6856356d406010dfcf1401398629560d736085679caa5d10127b3/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:09 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543411b972c6856356d406010dfcf1401398629560d736085679caa5d10127b3/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:09 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.
Dec  2 11:41:09 np0005542546 podman[201758]: 2025-12-02 16:41:09.923735674 +0000 UTC m=+0.118048909 container init 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.938Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.938Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.938Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.938Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.938Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=arp
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=bcache
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=bonding
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=cpu
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=edac
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=filefd
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=netclass
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=netdev
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=netstat
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=nfs
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=nvme
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=softnet
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=systemd
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=xfs
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.939Z caller=node_exporter.go:117 level=info collector=zfs
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.940Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  2 11:41:09 np0005542546 node_exporter[201772]: ts=2025-12-02T16:41:09.940Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  2 11:41:09 np0005542546 podman[201758]: 2025-12-02 16:41:09.956880265 +0000 UTC m=+0.151193450 container start 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:41:09 np0005542546 podman[201758]: node_exporter
Dec  2 11:41:09 np0005542546 systemd[1]: Started node_exporter container.
Dec  2 11:41:10 np0005542546 podman[201781]: 2025-12-02 16:41:10.038289981 +0000 UTC m=+0.065083282 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:41:10 np0005542546 python3.9[201958]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:41:10 np0005542546 systemd[1]: Stopping node_exporter container...
Dec  2 11:41:10 np0005542546 systemd[1]: libpod-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope: Deactivated successfully.
Dec  2 11:41:10 np0005542546 podman[201962]: 2025-12-02 16:41:10.856255671 +0000 UTC m=+0.076533131 container died 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:41:10 np0005542546 systemd[1]: 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3-2f24854988b4176f.timer: Deactivated successfully.
Dec  2 11:41:10 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.
Dec  2 11:41:10 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3-userdata-shm.mount: Deactivated successfully.
Dec  2 11:41:10 np0005542546 systemd[1]: var-lib-containers-storage-overlay-543411b972c6856356d406010dfcf1401398629560d736085679caa5d10127b3-merged.mount: Deactivated successfully.
Dec  2 11:41:10 np0005542546 podman[201962]: 2025-12-02 16:41:10.906561058 +0000 UTC m=+0.126838548 container cleanup 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:41:10 np0005542546 podman[201962]: node_exporter
Dec  2 11:41:10 np0005542546 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  2 11:41:11 np0005542546 podman[201989]: node_exporter
Dec  2 11:41:11 np0005542546 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  2 11:41:11 np0005542546 systemd[1]: Stopped node_exporter container.
Dec  2 11:41:11 np0005542546 systemd[1]: Starting node_exporter container...
Dec  2 11:41:11 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:41:11 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543411b972c6856356d406010dfcf1401398629560d736085679caa5d10127b3/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:11 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543411b972c6856356d406010dfcf1401398629560d736085679caa5d10127b3/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:41:11 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.
Dec  2 11:41:11 np0005542546 podman[202000]: 2025-12-02 16:41:11.18747003 +0000 UTC m=+0.130646397 container init 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.199Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.199Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.199Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.200Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.200Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.200Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.200Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=arp
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=bcache
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=bonding
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=cpu
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=edac
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=filefd
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=netclass
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=netdev
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=netstat
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=nfs
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=nvme
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=softnet
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=systemd
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=xfs
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.201Z caller=node_exporter.go:117 level=info collector=zfs
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.202Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  2 11:41:11 np0005542546 node_exporter[202015]: ts=2025-12-02T16:41:11.202Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  2 11:41:11 np0005542546 podman[202000]: 2025-12-02 16:41:11.217816538 +0000 UTC m=+0.160992895 container start 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:41:11 np0005542546 podman[202000]: node_exporter
Dec  2 11:41:11 np0005542546 systemd[1]: Started node_exporter container.
Dec  2 11:41:11 np0005542546 podman[202024]: 2025-12-02 16:41:11.274976484 +0000 UTC m=+0.048058810 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:41:11 np0005542546 python3.9[202199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:41:12 np0005542546 python3.9[202322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693671.4268768-663-158146952013609/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:41:12 np0005542546 podman[202323]: 2025-12-02 16:41:12.55944119 +0000 UTC m=+0.091371096 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 11:41:13 np0005542546 python3.9[202497]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  2 11:41:13 np0005542546 python3.9[202649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:41:14 np0005542546 python3[202801]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:41:32 np0005542546 podman[202829]: 2025-12-02 16:41:32.236334732 +0000 UTC m=+0.062891936 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 11:41:32 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:41:32 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Failed with result 'exit-code'.
Dec  2 11:41:33 np0005542546 podman[202848]: 2025-12-02 16:41:33.241745404 +0000 UTC m=+0.071686877 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 11:41:37 np0005542546 podman[202868]: 2025-12-02 16:41:37.231798909 +0000 UTC m=+0.062981299 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 11:41:42 np0005542546 podman[202887]: 2025-12-02 16:41:42.237232167 +0000 UTC m=+0.065684834 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:41:43 np0005542546 podman[202913]: 2025-12-02 16:41:43.265834572 +0000 UTC m=+0.100090452 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 11:41:59 np0005542546 nova_compute[189459]: 2025-12-02 16:41:59.745 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:41:59 np0005542546 nova_compute[189459]: 2025-12-02 16:41:59.762 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:41:59 np0005542546 nova_compute[189459]: 2025-12-02 16:41:59.762 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:41:59 np0005542546 nova_compute[189459]: 2025-12-02 16:41:59.763 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:41:59 np0005542546 nova_compute[189459]: 2025-12-02 16:41:59.763 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.430 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:00 np0005542546 nova_compute[189459]: 2025-12-02 16:42:00.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.441 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.441 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.639 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.640 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5986MB free_disk=72.42946243286133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.641 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.641 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.716 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.717 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.739 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.753 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.755 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 11:42:01 np0005542546 nova_compute[189459]: 2025-12-02 16:42:01.756 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:42:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:42:01.842 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:42:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:42:01.843 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:42:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:42:01.843 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:42:03 np0005542546 podman[202940]: 2025-12-02 16:42:03.221406166 +0000 UTC m=+0.054105978 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 11:42:03 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:42:03 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Failed with result 'exit-code'.
Dec  2 11:42:04 np0005542546 podman[202959]: 2025-12-02 16:42:04.268915301 +0000 UTC m=+0.096006535 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 11:42:08 np0005542546 podman[202980]: 2025-12-02 16:42:08.24532366 +0000 UTC m=+0.069230437 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  2 11:42:13 np0005542546 podman[202999]: 2025-12-02 16:42:13.219243718 +0000 UTC m=+0.053411672 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:42:14 np0005542546 podman[203023]: 2025-12-02 16:42:14.249276638 +0000 UTC m=+0.083206391 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  2 11:42:34 np0005542546 podman[203049]: 2025-12-02 16:42:34.233914234 +0000 UTC m=+0.063103742 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=4, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 11:42:34 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:42:34 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Failed with result 'exit-code'.
Dec  2 11:42:35 np0005542546 podman[203068]: 2025-12-02 16:42:35.237141706 +0000 UTC m=+0.068641793 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 11:42:38 np0005542546 podman[202814]: 2025-12-02 16:42:38.091975284 +0000 UTC m=+83.219768443 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  2 11:42:38 np0005542546 podman[203174]: 2025-12-02 16:42:38.237118021 +0000 UTC m=+0.049730173 container create c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:42:38 np0005542546 podman[203174]: 2025-12-02 16:42:38.212734048 +0000 UTC m=+0.025346220 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  2 11:42:38 np0005542546 python3[202801]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  2 11:42:38 np0005542546 podman[203336]: 2025-12-02 16:42:38.837815531 +0000 UTC m=+0.050627187 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  2 11:42:39 np0005542546 python3.9[203383]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:42:39 np0005542546 python3.9[203539]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:42:40 np0005542546 python3.9[203690]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693759.8632987-716-103434750530555/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:42:41 np0005542546 python3.9[203766]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:42:41 np0005542546 systemd[1]: Reloading.
Dec  2 11:42:41 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:42:41 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:42:42 np0005542546 python3.9[203876]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:42:42 np0005542546 systemd[1]: Reloading.
Dec  2 11:42:42 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:42:42 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:42:42 np0005542546 systemd[1]: Starting podman_exporter container...
Dec  2 11:42:42 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:42:42 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322d4045a210d4dd299c074e39374fb298edf565df7e670aa43c2a8a9bcd35c0/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:42 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322d4045a210d4dd299c074e39374fb298edf565df7e670aa43c2a8a9bcd35c0/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:43 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.
Dec  2 11:42:43 np0005542546 podman[203915]: 2025-12-02 16:42:43.248054846 +0000 UTC m=+0.830791312 container init c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.275Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.275Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.275Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.275Z caller=handler.go:105 level=info collector=container
Dec  2 11:42:43 np0005542546 systemd[1]: Starting Podman API Service...
Dec  2 11:42:43 np0005542546 systemd[1]: Started Podman API Service.
Dec  2 11:42:43 np0005542546 podman[203915]: 2025-12-02 16:42:43.309964634 +0000 UTC m=+0.892701050 container start c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:42:43 np0005542546 podman[203915]: podman_exporter
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="Setting parallel job count to 25"
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="Using sqlite as database backend"
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  2 11:42:43 np0005542546 systemd[1]: Started podman_exporter container.
Dec  2 11:42:43 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:42:43 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  2 11:42:43 np0005542546 podman[203941]: time="2025-12-02T16:42:43Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:42:43 np0005542546 podman[203939]: 2025-12-02 16:42:43.352681558 +0000 UTC m=+0.061517258 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 11:42:43 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:42:43 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19589 "" "Go-http-client/1.1"
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.372Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.372Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  2 11:42:43 np0005542546 podman_exporter[203930]: ts=2025-12-02T16:42:43.372Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  2 11:42:43 np0005542546 podman[203956]: 2025-12-02 16:42:43.375501679 +0000 UTC m=+0.055517547 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:42:43 np0005542546 systemd[1]: c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23-5f9b105b0665146.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:42:43 np0005542546 systemd[1]: c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23-5f9b105b0665146.service: Failed with result 'exit-code'.
Dec  2 11:42:44 np0005542546 python3.9[204152]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:42:44 np0005542546 systemd[1]: Stopping podman_exporter container...
Dec  2 11:42:44 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:42:43 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 3329 "" "Go-http-client/1.1"
Dec  2 11:42:44 np0005542546 systemd[1]: libpod-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope: Deactivated successfully.
Dec  2 11:42:44 np0005542546 podman[204156]: 2025-12-02 16:42:44.256299721 +0000 UTC m=+0.050351599 container died c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:42:44 np0005542546 systemd[1]: c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23-5f9b105b0665146.timer: Deactivated successfully.
Dec  2 11:42:44 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.
Dec  2 11:42:44 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23-userdata-shm.mount: Deactivated successfully.
Dec  2 11:42:44 np0005542546 systemd[1]: var-lib-containers-storage-overlay-322d4045a210d4dd299c074e39374fb298edf565df7e670aa43c2a8a9bcd35c0-merged.mount: Deactivated successfully.
Dec  2 11:42:44 np0005542546 podman[204177]: 2025-12-02 16:42:44.390186438 +0000 UTC m=+0.090324501 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:42:44 np0005542546 podman[204156]: 2025-12-02 16:42:44.550029019 +0000 UTC m=+0.344080867 container cleanup c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:42:44 np0005542546 podman[204156]: podman_exporter
Dec  2 11:42:44 np0005542546 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  2 11:42:44 np0005542546 podman[204211]: podman_exporter
Dec  2 11:42:44 np0005542546 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  2 11:42:44 np0005542546 systemd[1]: Stopped podman_exporter container.
Dec  2 11:42:44 np0005542546 systemd[1]: Starting podman_exporter container...
Dec  2 11:42:44 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:42:44 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322d4045a210d4dd299c074e39374fb298edf565df7e670aa43c2a8a9bcd35c0/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:44 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322d4045a210d4dd299c074e39374fb298edf565df7e670aa43c2a8a9bcd35c0/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:44 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.
Dec  2 11:42:44 np0005542546 podman[204224]: 2025-12-02 16:42:44.823071492 +0000 UTC m=+0.139156078 container init c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.842Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.842Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.842Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.842Z caller=handler.go:105 level=info collector=container
Dec  2 11:42:44 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:42:44 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  2 11:42:44 np0005542546 podman[203941]: time="2025-12-02T16:42:44Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:42:44 np0005542546 podman[204224]: 2025-12-02 16:42:44.86068174 +0000 UTC m=+0.176766276 container start c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:42:44 np0005542546 podman[204224]: podman_exporter
Dec  2 11:42:44 np0005542546 systemd[1]: Started podman_exporter container.
Dec  2 11:42:44 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:42:44 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19591 "" "Go-http-client/1.1"
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.878Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.878Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  2 11:42:44 np0005542546 podman_exporter[204240]: ts=2025-12-02T16:42:44.879Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  2 11:42:44 np0005542546 podman[204249]: 2025-12-02 16:42:44.960989516 +0000 UTC m=+0.087167475 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:42:45 np0005542546 python3.9[204425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:42:46 np0005542546 python3.9[204548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693765.0646303-748-226305721308016/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:42:46 np0005542546 python3.9[204700]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  2 11:42:47 np0005542546 python3.9[204852]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:42:48 np0005542546 python3[205004]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:42:50 np0005542546 podman[205017]: 2025-12-02 16:42:50.587342576 +0000 UTC m=+2.093465044 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  2 11:42:50 np0005542546 podman[205114]: 2025-12-02 16:42:50.724246683 +0000 UTC m=+0.048150751 container create dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec  2 11:42:50 np0005542546 podman[205114]: 2025-12-02 16:42:50.698226226 +0000 UTC m=+0.022130314 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  2 11:42:50 np0005542546 python3[205004]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  2 11:42:51 np0005542546 python3.9[205303]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:42:52 np0005542546 python3.9[205457]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:42:52 np0005542546 python3.9[205608]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693772.2482858-801-46528620217297/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:42:53 np0005542546 python3.9[205684]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:42:53 np0005542546 systemd[1]: Reloading.
Dec  2 11:42:53 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:42:53 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:42:54 np0005542546 python3.9[205794]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:42:54 np0005542546 systemd[1]: Reloading.
Dec  2 11:42:54 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:42:54 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:42:54 np0005542546 systemd[1]: Starting openstack_network_exporter container...
Dec  2 11:42:54 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:42:54 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:54 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:54 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:54 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.
Dec  2 11:42:54 np0005542546 podman[205836]: 2025-12-02 16:42:54.97656036 +0000 UTC m=+0.120316304 container init dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *bridge.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *coverage.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *datapath.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *iface.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *memory.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *ovnnorthd.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *ovn.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *ovsdbserver.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *pmd_perf.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *pmd_rxq.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: INFO    16:42:54 main.go:48: registering *vswitch.Collector
Dec  2 11:42:54 np0005542546 openstack_network_exporter[205850]: NOTICE  16:42:54 main.go:76: listening on https://:9105/metrics
Dec  2 11:42:55 np0005542546 podman[205836]: 2025-12-02 16:42:55.011836884 +0000 UTC m=+0.155592808 container start dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  2 11:42:55 np0005542546 podman[205836]: openstack_network_exporter
Dec  2 11:42:55 np0005542546 systemd[1]: Started openstack_network_exporter container.
Dec  2 11:42:55 np0005542546 podman[205860]: 2025-12-02 16:42:55.103208032 +0000 UTC m=+0.078100383 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 11:42:55 np0005542546 python3.9[206034]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:42:55 np0005542546 systemd[1]: Stopping openstack_network_exporter container...
Dec  2 11:42:55 np0005542546 systemd[1]: libpod-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope: Deactivated successfully.
Dec  2 11:42:55 np0005542546 podman[206038]: 2025-12-02 16:42:55.882073514 +0000 UTC m=+0.047604126 container died dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41)
Dec  2 11:42:55 np0005542546 systemd[1]: dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de-3db2d5f7a7f19fd5.timer: Deactivated successfully.
Dec  2 11:42:55 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.
Dec  2 11:42:55 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de-userdata-shm.mount: Deactivated successfully.
Dec  2 11:42:55 np0005542546 systemd[1]: var-lib-containers-storage-overlay-7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209-merged.mount: Deactivated successfully.
Dec  2 11:42:56 np0005542546 podman[206038]: 2025-12-02 16:42:56.846488505 +0000 UTC m=+1.012019157 container cleanup dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 11:42:56 np0005542546 podman[206038]: openstack_network_exporter
Dec  2 11:42:56 np0005542546 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  2 11:42:56 np0005542546 podman[206065]: openstack_network_exporter
Dec  2 11:42:56 np0005542546 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  2 11:42:56 np0005542546 systemd[1]: Stopped openstack_network_exporter container.
Dec  2 11:42:56 np0005542546 systemd[1]: Starting openstack_network_exporter container...
Dec  2 11:42:57 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:42:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:57 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb56e4559a18d40c350f8d99d56187daf933d6a1f5bd9731760e6e9d09f7209/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:42:57 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.
Dec  2 11:42:57 np0005542546 podman[206078]: 2025-12-02 16:42:57.10651848 +0000 UTC m=+0.140041002 container init dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *bridge.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *coverage.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *datapath.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *iface.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *memory.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *ovnnorthd.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *ovn.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *ovsdbserver.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *pmd_perf.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *pmd_rxq.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: INFO    16:42:57 main.go:48: registering *vswitch.Collector
Dec  2 11:42:57 np0005542546 openstack_network_exporter[206093]: NOTICE  16:42:57 main.go:76: listening on https://:9105/metrics
Dec  2 11:42:57 np0005542546 podman[206078]: 2025-12-02 16:42:57.128826887 +0000 UTC m=+0.162349449 container start dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 11:42:57 np0005542546 podman[206078]: openstack_network_exporter
Dec  2 11:42:57 np0005542546 systemd[1]: Started openstack_network_exporter container.
Dec  2 11:42:57 np0005542546 podman[206103]: 2025-12-02 16:42:57.20881922 +0000 UTC m=+0.069663857 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  2 11:42:57 np0005542546 python3.9[206273]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:42:58 np0005542546 python3.9[206425]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  2 11:42:59 np0005542546 python3.9[206590]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:42:59 np0005542546 nova_compute[189459]: 2025-12-02 16:42:59.756 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:42:59 np0005542546 systemd[1]: Started libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope.
Dec  2 11:42:59 np0005542546 podman[206591]: 2025-12-02 16:42:59.865242191 +0000 UTC m=+0.105251260 container exec 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 11:42:59 np0005542546 podman[206591]: 2025-12-02 16:42:59.900905526 +0000 UTC m=+0.140914585 container exec_died 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  2 11:42:59 np0005542546 systemd[1]: libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope: Deactivated successfully.
Dec  2 11:43:00 np0005542546 nova_compute[189459]: 2025-12-02 16:43:00.404 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:00 np0005542546 nova_compute[189459]: 2025-12-02 16:43:00.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:00 np0005542546 nova_compute[189459]: 2025-12-02 16:43:00.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 11:43:00 np0005542546 python3.9[206775]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:00 np0005542546 systemd[1]: Started libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope.
Dec  2 11:43:00 np0005542546 podman[206776]: 2025-12-02 16:43:00.684487015 +0000 UTC m=+0.063601225 container exec 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:43:00 np0005542546 podman[206776]: 2025-12-02 16:43:00.717730665 +0000 UTC m=+0.096844835 container exec_died 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:43:00 np0005542546 systemd[1]: libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope: Deactivated successfully.
Dec  2 11:43:01 np0005542546 python3.9[206959]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.434 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.435 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.435 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.435 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.476 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.477 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.477 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.478 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.654 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.655 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5876MB free_disk=72.2609748840332GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.655 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.656 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.731 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.732 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.750 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.769 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.771 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 11:43:01 np0005542546 nova_compute[189459]: 2025-12-02 16:43:01.772 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:43:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:43:01.844 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:43:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:43:01.844 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:43:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:43:01.844 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:43:02 np0005542546 python3.9[207111]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  2 11:43:02 np0005542546 nova_compute[189459]: 2025-12-02 16:43:02.746 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:02 np0005542546 nova_compute[189459]: 2025-12-02 16:43:02.746 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:02 np0005542546 python3.9[207276]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:02 np0005542546 systemd[1]: Started libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope.
Dec  2 11:43:02 np0005542546 podman[207277]: 2025-12-02 16:43:02.976367692 +0000 UTC m=+0.077568909 container exec d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 11:43:03 np0005542546 podman[207277]: 2025-12-02 16:43:03.009819248 +0000 UTC m=+0.111020465 container exec_died d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 11:43:03 np0005542546 systemd[1]: libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope: Deactivated successfully.
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.044 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.045 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.049 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d7553d0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:43:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:43:03 np0005542546 python3.9[207461]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:03 np0005542546 systemd[1]: Started libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope.
Dec  2 11:43:03 np0005542546 podman[207462]: 2025-12-02 16:43:03.819129455 +0000 UTC m=+0.072651347 container exec d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 11:43:03 np0005542546 podman[207462]: 2025-12-02 16:43:03.849129588 +0000 UTC m=+0.102651450 container exec_died d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 11:43:03 np0005542546 systemd[1]: libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope: Deactivated successfully.
Dec  2 11:43:04 np0005542546 podman[207647]: 2025-12-02 16:43:04.383109281 +0000 UTC m=+0.047510514 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=5, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm)
Dec  2 11:43:04 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:43:04 np0005542546 systemd[1]: 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c-2032dccc6af3a935.service: Failed with result 'exit-code'.
Dec  2 11:43:04 np0005542546 python3.9[207648]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:05 np0005542546 python3.9[207820]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  2 11:43:05 np0005542546 podman[207957]: 2025-12-02 16:43:05.746244882 +0000 UTC m=+0.069716899 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:43:05 np0005542546 python3.9[208003]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:06 np0005542546 systemd[1]: Started libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope.
Dec  2 11:43:06 np0005542546 podman[208005]: 2025-12-02 16:43:06.021517655 +0000 UTC m=+0.074225699 container exec 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  2 11:43:06 np0005542546 podman[208005]: 2025-12-02 16:43:06.055806103 +0000 UTC m=+0.108514147 container exec_died 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  2 11:43:06 np0005542546 systemd[1]: libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope: Deactivated successfully.
Dec  2 11:43:06 np0005542546 python3.9[208188]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:07 np0005542546 systemd[1]: Started libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope.
Dec  2 11:43:07 np0005542546 podman[208189]: 2025-12-02 16:43:07.022146917 +0000 UTC m=+0.094118762 container exec 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 11:43:07 np0005542546 podman[208189]: 2025-12-02 16:43:07.051289917 +0000 UTC m=+0.123261722 container exec_died 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 11:43:07 np0005542546 systemd[1]: libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope: Deactivated successfully.
Dec  2 11:43:07 np0005542546 python3.9[208368]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:08 np0005542546 python3.9[208520]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  2 11:43:08 np0005542546 podman[208686]: 2025-12-02 16:43:08.978857756 +0000 UTC m=+0.074418654 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  2 11:43:09 np0005542546 python3.9[208687]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:09 np0005542546 systemd[1]: Started libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope.
Dec  2 11:43:09 np0005542546 podman[208706]: 2025-12-02 16:43:09.269003748 +0000 UTC m=+0.130262640 container exec 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec  2 11:43:09 np0005542546 podman[208724]: 2025-12-02 16:43:09.410702544 +0000 UTC m=+0.122411510 container exec_died 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  2 11:43:09 np0005542546 podman[208706]: 2025-12-02 16:43:09.429383294 +0000 UTC m=+0.290642206 container exec_died 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  2 11:43:09 np0005542546 systemd[1]: libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope: Deactivated successfully.
Dec  2 11:43:10 np0005542546 python3.9[208888]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:10 np0005542546 systemd[1]: Started libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope.
Dec  2 11:43:10 np0005542546 podman[208889]: 2025-12-02 16:43:10.15985929 +0000 UTC m=+0.060338527 container exec 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible)
Dec  2 11:43:10 np0005542546 podman[208889]: 2025-12-02 16:43:10.195769382 +0000 UTC m=+0.096248609 container exec_died 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 11:43:10 np0005542546 systemd[1]: libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope: Deactivated successfully.
Dec  2 11:43:11 np0005542546 python3.9[209072]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:11 np0005542546 python3.9[209224]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  2 11:43:12 np0005542546 python3.9[209390]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:12 np0005542546 systemd[1]: Started libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope.
Dec  2 11:43:12 np0005542546 podman[209391]: 2025-12-02 16:43:12.663244552 +0000 UTC m=+0.097803631 container exec 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 11:43:12 np0005542546 podman[209391]: 2025-12-02 16:43:12.693174694 +0000 UTC m=+0.127733683 container exec_died 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:43:12 np0005542546 systemd[1]: libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope: Deactivated successfully.
Dec  2 11:43:13 np0005542546 python3.9[209575]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:13 np0005542546 systemd[1]: Started libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope.
Dec  2 11:43:13 np0005542546 podman[209576]: 2025-12-02 16:43:13.477513412 +0000 UTC m=+0.074661061 container exec 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 11:43:13 np0005542546 podman[209576]: 2025-12-02 16:43:13.508303487 +0000 UTC m=+0.105451116 container exec_died 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 11:43:13 np0005542546 systemd[1]: libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope: Deactivated successfully.
Dec  2 11:43:13 np0005542546 podman[209593]: 2025-12-02 16:43:13.559163239 +0000 UTC m=+0.080908128 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 11:43:14 np0005542546 python3.9[209784]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:14 np0005542546 podman[209908]: 2025-12-02 16:43:14.684560732 +0000 UTC m=+0.087351090 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller)
Dec  2 11:43:14 np0005542546 python3.9[209955]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  2 11:43:15 np0005542546 podman[209978]: 2025-12-02 16:43:15.237863143 +0000 UTC m=+0.063823631 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:43:15 np0005542546 python3.9[210150]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:15 np0005542546 systemd[1]: Started libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope.
Dec  2 11:43:15 np0005542546 podman[210151]: 2025-12-02 16:43:15.923665961 +0000 UTC m=+0.099146287 container exec c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:43:15 np0005542546 podman[210151]: 2025-12-02 16:43:15.955300828 +0000 UTC m=+0.130781064 container exec_died c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:43:15 np0005542546 systemd[1]: libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope: Deactivated successfully.
Dec  2 11:43:16 np0005542546 python3.9[210334]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:16 np0005542546 systemd[1]: Started libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope.
Dec  2 11:43:16 np0005542546 podman[210335]: 2025-12-02 16:43:16.765007926 +0000 UTC m=+0.072932824 container exec c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 11:43:16 np0005542546 podman[210335]: 2025-12-02 16:43:16.797747793 +0000 UTC m=+0.105672671 container exec_died c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:43:16 np0005542546 systemd[1]: libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope: Deactivated successfully.
Dec  2 11:43:17 np0005542546 python3.9[210518]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:18 np0005542546 python3.9[210670]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  2 11:43:18 np0005542546 python3.9[210836]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:18 np0005542546 systemd[1]: Started libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope.
Dec  2 11:43:19 np0005542546 podman[210837]: 2025-12-02 16:43:19.004882701 +0000 UTC m=+0.080361773 container exec dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 11:43:19 np0005542546 podman[210837]: 2025-12-02 16:43:19.039760034 +0000 UTC m=+0.115239096 container exec_died dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 11:43:19 np0005542546 systemd[1]: libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope: Deactivated successfully.
Dec  2 11:43:19 np0005542546 python3.9[211018]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:43:19 np0005542546 systemd[1]: Started libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope.
Dec  2 11:43:19 np0005542546 podman[211019]: 2025-12-02 16:43:19.897437587 +0000 UTC m=+0.075253717 container exec dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6)
Dec  2 11:43:19 np0005542546 podman[211019]: 2025-12-02 16:43:19.928436037 +0000 UTC m=+0.106252167 container exec_died dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 11:43:19 np0005542546 systemd[1]: libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope: Deactivated successfully.
Dec  2 11:43:20 np0005542546 python3.9[211201]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:21 np0005542546 python3.9[211353]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:21 np0005542546 python3.9[211505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:22 np0005542546 python3.9[211628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693801.483005-1082-192248800883728/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:23 np0005542546 python3.9[211780]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:23 np0005542546 python3.9[211932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:24 np0005542546 python3.9[212010]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:25 np0005542546 python3.9[212162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:25 np0005542546 python3.9[212240]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.moioytfq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:26 np0005542546 python3.9[212392]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:26 np0005542546 python3.9[212470]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:27 np0005542546 python3.9[212622]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:43:27 np0005542546 podman[212747]: 2025-12-02 16:43:27.850169637 +0000 UTC m=+0.072579995 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 11:43:28 np0005542546 python3[212791]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 11:43:28 np0005542546 python3.9[212947]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:29 np0005542546 python3.9[213025]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:29 np0005542546 python3.9[213177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:30 np0005542546 python3.9[213255]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:30 np0005542546 python3.9[213407]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:31 np0005542546 python3.9[213485]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:32 np0005542546 python3.9[213637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:32 np0005542546 python3.9[213715]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:33 np0005542546 python3.9[213867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:33 np0005542546 python3.9[213992]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693812.7701511-1207-246926798523696/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:34 np0005542546 python3.9[214144]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:35 np0005542546 podman[214268]: 2025-12-02 16:43:35.043452457 +0000 UTC m=+0.063621495 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  2 11:43:35 np0005542546 python3.9[214315]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:43:35 np0005542546 podman[214442]: 2025-12-02 16:43:35.89665212 +0000 UTC m=+0.060712057 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 11:43:36 np0005542546 python3.9[214487]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:36 np0005542546 python3.9[214642]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:43:37 np0005542546 python3.9[214795]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:43:38 np0005542546 python3.9[214949]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: ERROR   16:43:38 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: ERROR   16:43:38 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: ERROR   16:43:38 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: ERROR   16:43:38 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: ERROR   16:43:38 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 11:43:38 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:43:38 np0005542546 python3.9[215108]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:39 np0005542546 systemd[1]: session-25.scope: Deactivated successfully.
Dec  2 11:43:39 np0005542546 systemd[1]: session-25.scope: Consumed 1min 40.077s CPU time.
Dec  2 11:43:39 np0005542546 systemd-logind[790]: Session 25 logged out. Waiting for processes to exit.
Dec  2 11:43:39 np0005542546 systemd-logind[790]: Removed session 25.
Dec  2 11:43:39 np0005542546 podman[215133]: 2025-12-02 16:43:39.223305623 +0000 UTC m=+0.054275694 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 11:43:44 np0005542546 podman[215156]: 2025-12-02 16:43:44.247188493 +0000 UTC m=+0.080982474 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:43:44 np0005542546 systemd-logind[790]: New session 26 of user zuul.
Dec  2 11:43:44 np0005542546 systemd[1]: Started Session 26 of User zuul.
Dec  2 11:43:44 np0005542546 podman[215182]: 2025-12-02 16:43:44.829434625 +0000 UTC m=+0.090422564 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:43:45 np0005542546 podman[215333]: 2025-12-02 16:43:45.607555543 +0000 UTC m=+0.099218067 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 11:43:45 np0005542546 python3.9[215378]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:43:45 np0005542546 systemd[1]: Reloading.
Dec  2 11:43:46 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:43:46 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:43:47 np0005542546 python3.9[215570]: ansible-ansible.builtin.service_facts Invoked
Dec  2 11:43:47 np0005542546 network[215587]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 11:43:47 np0005542546 network[215588]: 'network-scripts' will be removed from distribution in near future.
Dec  2 11:43:47 np0005542546 network[215589]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 11:43:51 np0005542546 python3.9[215863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:43:52 np0005542546 python3.9[216017]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:52 np0005542546 python3.9[216169]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:43:53 np0005542546 python3.9[216322]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:43:54 np0005542546 python3.9[216474]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:43:55 np0005542546 python3.9[216626]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:43:55 np0005542546 systemd[1]: Reloading.
Dec  2 11:43:55 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:43:55 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:43:56 np0005542546 python3.9[216813]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:43:57 np0005542546 python3.9[216966]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:43:58 np0005542546 podman[217090]: 2025-12-02 16:43:58.006434021 +0000 UTC m=+0.070928317 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 11:43:58 np0005542546 python3.9[217129]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:43:58 np0005542546 python3.9[217289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:43:59 np0005542546 nova_compute[189459]: 2025-12-02 16:43:59.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:43:59 np0005542546 python3.9[217410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693838.3542926-125-269921630470906/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:43:59 np0005542546 podman[203941]: time="2025-12-02T16:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:43:59 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec  2 11:43:59 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3410 "" "Go-http-client/1.1"
Dec  2 11:44:00 np0005542546 nova_compute[189459]: 2025-12-02 16:44:00.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:00 np0005542546 python3.9[217564]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 11:44:01 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.437 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.437 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.437 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.438 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:01 np0005542546 nova_compute[189459]: 2025-12-02 16:44:01.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 11:44:01 np0005542546 python3.9[217716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:44:01.845 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:44:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:44:01.845 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:44:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:44:01.845 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:44:02 np0005542546 python3.9[217837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693841.1936207-171-48671486545549/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:02 np0005542546 nova_compute[189459]: 2025-12-02 16:44:02.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:02 np0005542546 python3.9[217987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:03 np0005542546 python3.9[218108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693842.3764286-171-162486038162113/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.432 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.432 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.432 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.433 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.577 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.578 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5854MB free_disk=72.2596549987793GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.578 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.579 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.645 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.646 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.667 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.681 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.682 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 11:44:03 np0005542546 nova_compute[189459]: 2025-12-02 16:44:03.683 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:44:04 np0005542546 python3.9[218258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:04 np0005542546 python3.9[218379]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693843.5721576-171-34122948669377/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:04 np0005542546 nova_compute[189459]: 2025-12-02 16:44:04.683 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:05 np0005542546 python3.9[218529]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:44:05 np0005542546 podman[218530]: 2025-12-02 16:44:05.227637927 +0000 UTC m=+0.059557894 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 11:44:05 np0005542546 python3.9[218701]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:44:06 np0005542546 podman[218780]: 2025-12-02 16:44:06.2445711 +0000 UTC m=+0.063398006 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 11:44:06 np0005542546 python3.9[218874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:07 np0005542546 python3.9[218995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693846.0653007-230-57344686196960/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:07 np0005542546 python3.9[219145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:08 np0005542546 python3.9[219221]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:08 np0005542546 python3.9[219371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:09 np0005542546 python3.9[219492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693848.3987978-230-4293022788484/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:09 np0005542546 podman[219493]: 2025-12-02 16:44:09.374348591 +0000 UTC m=+0.041945676 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:44:09 np0005542546 python3.9[219662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:10 np0005542546 python3.9[219783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693849.4696913-230-5641141296742/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:11 np0005542546 python3.9[219933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:11 np0005542546 python3.9[220054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693850.6794143-230-161222791349937/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:12 np0005542546 python3.9[220204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:13 np0005542546 python3.9[220325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693851.8665833-230-160306046746732/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:13 np0005542546 python3.9[220475]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:14 np0005542546 python3.9[220551]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:14 np0005542546 podman[220675]: 2025-12-02 16:44:14.732721654 +0000 UTC m=+0.069811706 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 11:44:14 np0005542546 python3.9[220727]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:15 np0005542546 podman[220729]: 2025-12-02 16:44:15.064418639 +0000 UTC m=+0.107995811 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:44:15 np0005542546 python3.9[220907]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:15 np0005542546 podman[220908]: 2025-12-02 16:44:15.801384253 +0000 UTC m=+0.064138435 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:44:16 np0005542546 python3.9[221083]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:44:17 np0005542546 python3.9[221236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:18 np0005542546 python3.9[221359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693857.3474762-349-31069081072146/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:44:18 np0005542546 python3.9[221435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:19 np0005542546 python3.9[221558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693857.3474762-349-31069081072146/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:44:20 np0005542546 python3.9[221710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:44:20 np0005542546 python3.9[221833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764693859.640026-349-92413919197489/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 11:44:21 np0005542546 python3.9[221985]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  2 11:44:22 np0005542546 python3.9[222137]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:44:23 np0005542546 python3[222289]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:44:23 np0005542546 podman[222326]: 2025-12-02 16:44:23.958152121 +0000 UTC m=+0.048815409 container create 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:44:23 np0005542546 podman[222326]: 2025-12-02 16:44:23.928989096 +0000 UTC m=+0.019652404 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  2 11:44:23 np0005542546 python3[222289]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume 
/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  2 11:44:24 np0005542546 python3.9[222516]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:44:25 np0005542546 python3.9[222670]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:26 np0005542546 python3.9[222821]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693865.7705984-427-176055437696565/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:27 np0005542546 python3.9[222897]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:44:27 np0005542546 systemd[1]: Reloading.
Dec  2 11:44:27 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:44:27 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:44:28 np0005542546 podman[223008]: 2025-12-02 16:44:28.250945827 +0000 UTC m=+0.079441122 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350)
Dec  2 11:44:28 np0005542546 python3.9[223007]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:44:28 np0005542546 systemd[1]: Reloading.
Dec  2 11:44:28 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:44:28 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:44:28 np0005542546 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  2 11:44:28 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:44:28 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:28 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:28 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:28 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:28 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.
Dec  2 11:44:28 np0005542546 podman[223068]: 2025-12-02 16:44:28.811765491 +0000 UTC m=+0.127241733 container init 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + sudo -E kolla_set_configs
Dec  2 11:44:28 np0005542546 podman[223068]: 2025-12-02 16:44:28.837063173 +0000 UTC m=+0.152539405 container start 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 11:44:28 np0005542546 podman[223068]: ceilometer_agent_ipmi
Dec  2 11:44:28 np0005542546 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Validating config file
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Copying service configuration files
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: INFO:__main__:Writing out command to execute
Dec  2 11:44:28 np0005542546 podman[223091]: 2025-12-02 16:44:28.903322184 +0000 UTC m=+0.052547658 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: ++ cat /run_command
Dec  2 11:44:28 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-50e59812f0e9c1ad.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:44:28 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-50e59812f0e9c1ad.service: Failed with result 'exit-code'.
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + ARGS=
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + sudo kolla_copy_cacerts
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + [[ ! -n '' ]]
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + . kolla_extend_start
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + umask 0022
Dec  2 11:44:28 np0005542546 ceilometer_agent_ipmi[223084]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  2 11:44:29 np0005542546 python3.9[223267]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  2 11:44:29 np0005542546 podman[203941]: time="2025-12-02T16:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:44:29 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25334 "" "Go-http-client/1.1"
Dec  2 11:44:29 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3839 "" "Go-http-client/1.1"
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.936 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.936 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.937 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.938 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.938 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.938 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.938 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.938 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.938 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.939 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.941 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.942 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.943 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.944 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.945 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.948 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.949 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.950 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.951 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.952 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.953 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.958 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.958 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.958 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.958 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.978 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.979 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 11:44:29 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:29.980 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.066 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpsz4f8ojq/privsep.sock']
Dec  2 11:44:30 np0005542546 python3.9[223426]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.772 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.773 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsz4f8ojq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.643 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.651 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.655 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.656 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.877 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.878 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.879 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.879 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.879 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.879 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.879 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.880 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.880 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.880 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.880 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.880 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.880 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.883 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.884 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.885 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.886 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.887 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.888 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.889 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.890 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.891 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.892 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.893 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.894 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.895 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.896 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.897 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.898 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.899 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.900 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.901 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.902 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  2 11:44:30 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:30.905 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  2 11:44:31 np0005542546 python3[223584]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 11:44:31 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:44:31 np0005542546 podman[223622]: 2025-12-02 16:44:31.590984816 +0000 UTC m=+0.062129772 container create 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm)
Dec  2 11:44:31 np0005542546 podman[223622]: 2025-12-02 16:44:31.559207801 +0000 UTC m=+0.030352807 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  2 11:44:31 np0005542546 python3[223584]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  2 11:44:32 np0005542546 python3.9[223813]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:44:33 np0005542546 python3.9[223967]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:33 np0005542546 python3.9[224118]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764693873.3504853-489-270495046811871/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:34 np0005542546 python3.9[224194]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 11:44:34 np0005542546 systemd[1]: Reloading.
Dec  2 11:44:34 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:44:34 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:44:35 np0005542546 python3.9[224306]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 11:44:35 np0005542546 systemd[1]: Reloading.
Dec  2 11:44:35 np0005542546 podman[224308]: 2025-12-02 16:44:35.56381126 +0000 UTC m=+0.078489297 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  2 11:44:35 np0005542546 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 11:44:35 np0005542546 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 11:44:35 np0005542546 systemd[1]: Starting kepler container...
Dec  2 11:44:35 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:44:36 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.
Dec  2 11:44:36 np0005542546 podman[224366]: 2025-12-02 16:44:36.021987545 +0000 UTC m=+0.148411775 container init 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 11:44:36 np0005542546 kepler[224383]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 11:44:36 np0005542546 podman[224366]: 2025-12-02 16:44:36.059704127 +0000 UTC m=+0.186128357 container start 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.063028       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.063287       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.063309       1 config.go:295] kernel version: 5.14
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.063939       1 power.go:78] Unable to obtain power, use estimate method
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.063964       1 redfish.go:169] failed to get redfish credential file path
Dec  2 11:44:36 np0005542546 podman[224366]: kepler
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.064392       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.064408       1 power.go:79] using none to obtain power
Dec  2 11:44:36 np0005542546 kepler[224383]: E1202 16:44:36.064429       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  2 11:44:36 np0005542546 kepler[224383]: E1202 16:44:36.064467       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  2 11:44:36 np0005542546 kepler[224383]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.068392       1 exporter.go:84] Number of CPUs: 8
Dec  2 11:44:36 np0005542546 systemd[1]: Started kepler container.
Dec  2 11:44:36 np0005542546 podman[224393]: 2025-12-02 16:44:36.163091335 +0000 UTC m=+0.090344392 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 11:44:36 np0005542546 systemd[1]: 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854-7bd55a9657c4da0f.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:44:36 np0005542546 systemd[1]: 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854-7bd55a9657c4da0f.service: Failed with result 'exit-code'.
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.628067       1 watcher.go:83] Using in cluster k8s config
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.628119       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  2 11:44:36 np0005542546 kepler[224383]: E1202 16:44:36.628192       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  2 11:44:36 np0005542546 podman[224540]: 2025-12-02 16:44:36.628737719 +0000 UTC m=+0.099797043 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.634751       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.634799       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.640454       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.640483       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.648525       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.648559       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.648574       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655195       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655226       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655230       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655234       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655239       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655250       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655324       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655381       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655402       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655418       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655498       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  2 11:44:36 np0005542546 kepler[224383]: I1202 16:44:36.655760       1 exporter.go:208] Started Kepler in 596.524962ms
Dec  2 11:44:36 np0005542546 python3.9[224588]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:44:36 np0005542546 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:37.079 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:37.181 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:37.181 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:37.182 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[223084]: 2025-12-02 16:44:37.192 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  2 11:44:37 np0005542546 systemd[1]: libpod-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.scope: Deactivated successfully.
Dec  2 11:44:37 np0005542546 systemd[1]: libpod-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.scope: Consumed 2.347s CPU time.
Dec  2 11:44:37 np0005542546 podman[224602]: 2025-12-02 16:44:37.376322616 +0000 UTC m=+0.369750507 container died 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 11:44:37 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-50e59812f0e9c1ad.timer: Deactivated successfully.
Dec  2 11:44:37 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.
Dec  2 11:44:37 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-userdata-shm.mount: Deactivated successfully.
Dec  2 11:44:37 np0005542546 systemd[1]: var-lib-containers-storage-overlay-d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661-merged.mount: Deactivated successfully.
Dec  2 11:44:37 np0005542546 podman[224602]: 2025-12-02 16:44:37.454214466 +0000 UTC m=+0.447642357 container cleanup 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  2 11:44:37 np0005542546 podman[224602]: ceilometer_agent_ipmi
Dec  2 11:44:37 np0005542546 podman[224628]: ceilometer_agent_ipmi
Dec  2 11:44:37 np0005542546 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  2 11:44:37 np0005542546 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  2 11:44:37 np0005542546 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  2 11:44:37 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:44:37 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:37 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:37 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:37 np0005542546 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1de1655f15117e79bba8a0e1a1e5d2fefec50f0612c6b997c5c00b0d667e661/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 11:44:37 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.
Dec  2 11:44:37 np0005542546 podman[224640]: 2025-12-02 16:44:37.77497565 +0000 UTC m=+0.178257868 container init 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + sudo -E kolla_set_configs
Dec  2 11:44:37 np0005542546 podman[224640]: 2025-12-02 16:44:37.810321858 +0000 UTC m=+0.213604056 container start 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  2 11:44:37 np0005542546 podman[224640]: ceilometer_agent_ipmi
Dec  2 11:44:37 np0005542546 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  2 11:44:37 np0005542546 podman[224661]: 2025-12-02 16:44:37.894073854 +0000 UTC m=+0.073795772 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, 
managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 11:44:37 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-7805ddd38d33d104.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:44:37 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-7805ddd38d33d104.service: Failed with result 'exit-code'.
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Validating config file
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Copying service configuration files
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: INFO:__main__:Writing out command to execute
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: ++ cat /run_command
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + ARGS=
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + sudo kolla_copy_cacerts
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + [[ ! -n '' ]]
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + . kolla_extend_start
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + umask 0022
Dec  2 11:44:37 np0005542546 ceilometer_agent_ipmi[224655]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  2 11:44:38 np0005542546 python3.9[224838]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 11:44:38 np0005542546 systemd[1]: Stopping kepler container...
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.876 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.877 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.878 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.878 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.879 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.879 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.879 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.880 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.880 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.880 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.881 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.881 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.882 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.882 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.883 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.883 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.883 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.883 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.884 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.885 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.886 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.887 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.888 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.889 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.890 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.891 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.892 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.893 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 kepler[224383]: I1202 16:44:38.893736       1 exporter.go:218] Received shutdown signal
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.894 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 kepler[224383]: I1202 16:44:38.894317       1 exporter.go:226] Exiting...
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.895 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.896 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.897 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.898 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.899 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.900 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.901 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.901 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.901 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.918 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.919 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.920 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 11:44:38 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:38.935 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpxkr2l0un/privsep.sock']
Dec  2 11:44:39 np0005542546 systemd[1]: libpod-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.scope: Deactivated successfully.
Dec  2 11:44:39 np0005542546 conmon[224383]: conmon 67ff5d4c323f417a0572 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.scope/container/memory.events
Dec  2 11:44:39 np0005542546 podman[224842]: 2025-12-02 16:44:39.076104905 +0000 UTC m=+0.240800560 container died 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Dec  2 11:44:39 np0005542546 systemd[1]: 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854-7bd55a9657c4da0f.timer: Deactivated successfully.
Dec  2 11:44:39 np0005542546 systemd[1]: Stopped /usr/bin/podman healthcheck run 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.
Dec  2 11:44:39 np0005542546 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854-userdata-shm.mount: Deactivated successfully.
Dec  2 11:44:39 np0005542546 systemd[1]: var-lib-containers-storage-overlay-f98a1d70e7fb6652dfbf444eaf4408e21db968b4c7f7c968d02e203307e43c54-merged.mount: Deactivated successfully.
Dec  2 11:44:39 np0005542546 podman[224842]: 2025-12-02 16:44:39.121607814 +0000 UTC m=+0.286303439 container cleanup 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Dec  2 11:44:39 np0005542546 podman[224842]: kepler
Dec  2 11:44:39 np0005542546 podman[224875]: kepler
Dec  2 11:44:39 np0005542546 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  2 11:44:39 np0005542546 systemd[1]: Stopped kepler container.
Dec  2 11:44:39 np0005542546 systemd[1]: Starting kepler container...
Dec  2 11:44:39 np0005542546 systemd[1]: Started libcrun container.
Dec  2 11:44:39 np0005542546 systemd[1]: Started /usr/bin/podman healthcheck run 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.
Dec  2 11:44:39 np0005542546 podman[224889]: 2025-12-02 16:44:39.320008527 +0000 UTC m=+0.105301830 container init 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc.)
Dec  2 11:44:39 np0005542546 kepler[224904]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 11:44:39 np0005542546 podman[224889]: 2025-12-02 16:44:39.347273651 +0000 UTC m=+0.132566924 container start 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container)
Dec  2 11:44:39 np0005542546 podman[224889]: kepler
Dec  2 11:44:39 np0005542546 systemd[1]: Started kepler container.
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.356144       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.356340       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.356387       1 config.go:295] kernel version: 5.14
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.357225       1 power.go:78] Unable to obtain power, use estimate method
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.357252       1 redfish.go:169] failed to get redfish credential file path
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.357662       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.357670       1 power.go:79] using none to obtain power
Dec  2 11:44:39 np0005542546 kepler[224904]: E1202 16:44:39.357687       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  2 11:44:39 np0005542546 kepler[224904]: E1202 16:44:39.357713       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  2 11:44:39 np0005542546 kepler[224904]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.359967       1 exporter.go:84] Number of CPUs: 8
Dec  2 11:44:39 np0005542546 podman[224914]: 2025-12-02 16:44:39.426569549 +0000 UTC m=+0.065527373 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  2 11:44:39 np0005542546 systemd[1]: 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854-299458f9bab4370.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:44:39 np0005542546 systemd[1]: 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854-299458f9bab4370.service: Failed with result 'exit-code'.
Dec  2 11:44:39 np0005542546 podman[224956]: 2025-12-02 16:44:39.514451244 +0000 UTC m=+0.060862908 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.596 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.596 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxkr2l0un/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.470 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.475 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.477 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.478 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.706 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.706 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.707 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.708 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.709 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.709 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.709 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.712 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.713 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.714 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.715 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.716 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.717 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.718 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.719 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.724 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.732 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.733 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.733 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  2 11:44:39 np0005542546 ceilometer_agent_ipmi[224655]: 2025-12-02 16:44:39.736 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.937637       1 watcher.go:83] Using in cluster k8s config
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.937677       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  2 11:44:39 np0005542546 kepler[224904]: E1202 16:44:39.937756       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.943636       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.943678       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.949476       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.949517       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.965764       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.965806       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.965823       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975669       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975707       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975713       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975718       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975725       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975737       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975819       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975845       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975868       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.975890       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.977034       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  2 11:44:39 np0005542546 kepler[224904]: I1202 16:44:39.977568       1 exporter.go:208] Started Kepler in 621.742023ms
Dec  2 11:44:40 np0005542546 python3.9[225115]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 11:44:41 np0005542546 python3.9[225277]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  2 11:44:42 np0005542546 python3.9[225440]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:42 np0005542546 systemd[1]: Started libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope.
Dec  2 11:44:42 np0005542546 podman[225441]: 2025-12-02 16:44:42.5232591 +0000 UTC m=+0.111285939 container exec 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:44:42 np0005542546 podman[225441]: 2025-12-02 16:44:42.556455892 +0000 UTC m=+0.144482701 container exec_died 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  2 11:44:42 np0005542546 systemd[1]: libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope: Deactivated successfully.
Dec  2 11:44:43 np0005542546 python3.9[225624]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:43 np0005542546 systemd[1]: Started libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope.
Dec  2 11:44:43 np0005542546 podman[225625]: 2025-12-02 16:44:43.63522938 +0000 UTC m=+0.118354107 container exec 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 11:44:43 np0005542546 podman[225625]: 2025-12-02 16:44:43.669829309 +0000 UTC m=+0.152953996 container exec_died 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 11:44:43 np0005542546 systemd[1]: libpod-conmon-38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb.scope: Deactivated successfully.
Dec  2 11:44:44 np0005542546 python3.9[225806]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:45 np0005542546 podman[225931]: 2025-12-02 16:44:45.218679108 +0000 UTC m=+0.102700861 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:44:45 np0005542546 podman[225930]: 2025-12-02 16:44:45.229460564 +0000 UTC m=+0.113869707 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 11:44:45 np0005542546 python3.9[225995]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  2 11:44:46 np0005542546 podman[226142]: 2025-12-02 16:44:46.212647102 +0000 UTC m=+0.098962021 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 11:44:46 np0005542546 python3.9[226193]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:46 np0005542546 systemd[1]: Started libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope.
Dec  2 11:44:46 np0005542546 podman[226194]: 2025-12-02 16:44:46.550546821 +0000 UTC m=+0.127542570 container exec d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 11:44:46 np0005542546 podman[226194]: 2025-12-02 16:44:46.585899511 +0000 UTC m=+0.162895180 container exec_died d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 11:44:46 np0005542546 systemd[1]: libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope: Deactivated successfully.
Dec  2 11:44:47 np0005542546 python3.9[226375]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:47 np0005542546 systemd[1]: Started libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope.
Dec  2 11:44:47 np0005542546 podman[226376]: 2025-12-02 16:44:47.636039849 +0000 UTC m=+0.144711705 container exec d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 11:44:47 np0005542546 podman[226376]: 2025-12-02 16:44:47.670249882 +0000 UTC m=+0.178783355 container exec_died d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 11:44:47 np0005542546 systemd[1]: libpod-conmon-d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942.scope: Deactivated successfully.
Dec  2 11:44:48 np0005542546 python3.9[226558]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:49 np0005542546 python3.9[226710]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  2 11:44:50 np0005542546 python3.9[226875]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:50 np0005542546 systemd[1]: Started libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope.
Dec  2 11:44:50 np0005542546 podman[226876]: 2025-12-02 16:44:50.569052055 +0000 UTC m=+0.123771966 container exec 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec  2 11:44:50 np0005542546 podman[226876]: 2025-12-02 16:44:50.601857341 +0000 UTC m=+0.156577232 container exec_died 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 11:44:50 np0005542546 systemd[1]: libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope: Deactivated successfully.
Dec  2 11:44:51 np0005542546 python3.9[227055]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:51 np0005542546 systemd[1]: Started libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope.
Dec  2 11:44:51 np0005542546 podman[227056]: 2025-12-02 16:44:51.579124549 +0000 UTC m=+0.121324521 container exec 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 11:44:51 np0005542546 podman[227056]: 2025-12-02 16:44:51.615745007 +0000 UTC m=+0.157944929 container exec_died 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 11:44:51 np0005542546 systemd[1]: libpod-conmon-92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24.scope: Deactivated successfully.
Dec  2 11:44:52 np0005542546 python3.9[227237]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:53 np0005542546 python3.9[227389]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  2 11:44:54 np0005542546 python3.9[227551]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:54 np0005542546 systemd[1]: Started libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope.
Dec  2 11:44:54 np0005542546 podman[227552]: 2025-12-02 16:44:54.524882406 +0000 UTC m=+0.114663233 container exec 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  2 11:44:54 np0005542546 podman[227552]: 2025-12-02 16:44:54.557839976 +0000 UTC m=+0.147620803 container exec_died 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  2 11:44:54 np0005542546 systemd[1]: libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope: Deactivated successfully.
Dec  2 11:44:55 np0005542546 python3.9[227732]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:55 np0005542546 systemd[1]: Started libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope.
Dec  2 11:44:55 np0005542546 podman[227733]: 2025-12-02 16:44:55.65289068 +0000 UTC m=+0.121299890 container exec 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible)
Dec  2 11:44:55 np0005542546 podman[227733]: 2025-12-02 16:44:55.686109906 +0000 UTC m=+0.154519096 container exec_died 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 11:44:55 np0005542546 systemd[1]: libpod-conmon-842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c.scope: Deactivated successfully.
Dec  2 11:44:56 np0005542546 python3.9[227913]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:44:57 np0005542546 python3.9[228065]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  2 11:44:58 np0005542546 podman[228202]: 2025-12-02 16:44:58.497467925 +0000 UTC m=+0.132827509 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9)
Dec  2 11:44:58 np0005542546 python3.9[228246]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:58 np0005542546 systemd[1]: Started libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope.
Dec  2 11:44:58 np0005542546 podman[228251]: 2025-12-02 16:44:58.798925445 +0000 UTC m=+0.111854418 container exec 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 11:44:58 np0005542546 podman[228251]: 2025-12-02 16:44:58.832534953 +0000 UTC m=+0.145463946 container exec_died 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:44:58 np0005542546 systemd[1]: libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope: Deactivated successfully.
Dec  2 11:44:59 np0005542546 nova_compute[189459]: 2025-12-02 16:44:59.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:59 np0005542546 nova_compute[189459]: 2025-12-02 16:44:59.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 11:44:59 np0005542546 nova_compute[189459]: 2025-12-02 16:44:59.434 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 11:44:59 np0005542546 nova_compute[189459]: 2025-12-02 16:44:59.435 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:59 np0005542546 nova_compute[189459]: 2025-12-02 16:44:59.435 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 11:44:59 np0005542546 nova_compute[189459]: 2025-12-02 16:44:59.448 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:44:59 np0005542546 python3.9[228434]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:44:59 np0005542546 podman[203941]: time="2025-12-02T16:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:44:59 np0005542546 systemd[1]: Started libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope.
Dec  2 11:44:59 np0005542546 podman[228435]: 2025-12-02 16:44:59.77760842 +0000 UTC m=+0.090558109 container exec 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 11:44:59 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28293 "" "Go-http-client/1.1"
Dec  2 11:44:59 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4254 "" "Go-http-client/1.1"
Dec  2 11:44:59 np0005542546 podman[228435]: 2025-12-02 16:44:59.815709298 +0000 UTC m=+0.128658967 container exec_died 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 11:44:59 np0005542546 systemd[1]: libpod-conmon-8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3.scope: Deactivated successfully.
Dec  2 11:45:00 np0005542546 nova_compute[189459]: 2025-12-02 16:45:00.464 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:00 np0005542546 python3.9[228617]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:01 np0005542546 nova_compute[189459]: 2025-12-02 16:45:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:01 np0005542546 nova_compute[189459]: 2025-12-02 16:45:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 11:45:01 np0005542546 nova_compute[189459]: 2025-12-02 16:45:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 11:45:01 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:45:01 np0005542546 nova_compute[189459]: 2025-12-02 16:45:01.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 11:45:01 np0005542546 nova_compute[189459]: 2025-12-02 16:45:01.439 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:01 np0005542546 nova_compute[189459]: 2025-12-02 16:45:01.439 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:01 np0005542546 python3.9[228770]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  2 11:45:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:45:01.846 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:45:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:45:01.846 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:45:01 np0005542546 ovn_metadata_agent[106830]: 2025-12-02 16:45:01.846 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:45:02 np0005542546 nova_compute[189459]: 2025-12-02 16:45:02.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:02 np0005542546 nova_compute[189459]: 2025-12-02 16:45:02.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:02 np0005542546 nova_compute[189459]: 2025-12-02 16:45:02.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 11:45:02 np0005542546 python3.9[228934]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:02 np0005542546 systemd[1]: Started libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope.
Dec  2 11:45:02 np0005542546 podman[228935]: 2025-12-02 16:45:02.536548379 +0000 UTC m=+0.099400876 container exec c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 11:45:02 np0005542546 podman[228935]: 2025-12-02 16:45:02.569150089 +0000 UTC m=+0.132002576 container exec_died c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:45:02 np0005542546 systemd[1]: libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope: Deactivated successfully.
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.044 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.045 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.049 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 ceilometer_agent_compute[200189]: 2025-12-02 16:45:03.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 11:45:03 np0005542546 python3.9[229118]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.441 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.442 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.443 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.443 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 11:45:03 np0005542546 systemd[1]: Started libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope.
Dec  2 11:45:03 np0005542546 podman[229119]: 2025-12-02 16:45:03.532465094 +0000 UTC m=+0.109270919 container exec c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:45:03 np0005542546 podman[229119]: 2025-12-02 16:45:03.565122306 +0000 UTC m=+0.141928111 container exec_died c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 11:45:03 np0005542546 systemd[1]: libpod-conmon-c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23.scope: Deactivated successfully.
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.780 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.782 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5687MB free_disk=72.25884628295898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.783 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 11:45:03 np0005542546 nova_compute[189459]: 2025-12-02 16:45:03.783 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.143 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.143 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.272 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 11:45:04 np0005542546 python3.9[229301]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.380 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.381 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.404 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.439 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.464 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.496 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.497 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 11:45:04 np0005542546 nova_compute[189459]: 2025-12-02 16:45:04.498 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 11:45:05 np0005542546 python3.9[229453]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  2 11:45:06 np0005542546 podman[229590]: 2025-12-02 16:45:06.043875352 +0000 UTC m=+0.092113381 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 11:45:06 np0005542546 python3.9[229636]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:06 np0005542546 systemd[1]: Started libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope.
Dec  2 11:45:06 np0005542546 podman[229638]: 2025-12-02 16:45:06.382233548 +0000 UTC m=+0.124938987 container exec dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 11:45:06 np0005542546 podman[229638]: 2025-12-02 16:45:06.417983272 +0000 UTC m=+0.160688681 container exec_died dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec  2 11:45:06 np0005542546 systemd[1]: libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope: Deactivated successfully.
Dec  2 11:45:06 np0005542546 nova_compute[189459]: 2025-12-02 16:45:06.497 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 11:45:07 np0005542546 podman[229792]: 2025-12-02 16:45:07.172332077 +0000 UTC m=+0.088453043 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 11:45:07 np0005542546 python3.9[229839]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:07 np0005542546 systemd[1]: Started libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope.
Dec  2 11:45:07 np0005542546 podman[229840]: 2025-12-02 16:45:07.564186742 +0000 UTC m=+0.164063913 container exec dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Dec  2 11:45:07 np0005542546 podman[229840]: 2025-12-02 16:45:07.600837631 +0000 UTC m=+0.200714762 container exec_died dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec  2 11:45:07 np0005542546 systemd[1]: libpod-conmon-dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de.scope: Deactivated successfully.
Dec  2 11:45:08 np0005542546 podman[229991]: 2025-12-02 16:45:08.242862746 +0000 UTC m=+0.070200346 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 11:45:08 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-7805ddd38d33d104.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 11:45:08 np0005542546 systemd[1]: 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050-7805ddd38d33d104.service: Failed with result 'exit-code'.
Dec  2 11:45:08 np0005542546 python3.9[230041]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:09 np0005542546 python3.9[230193]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  2 11:45:10 np0005542546 podman[230330]: 2025-12-02 16:45:10.083980423 +0000 UTC m=+0.097573896 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 11:45:10 np0005542546 podman[230329]: 2025-12-02 16:45:10.095561362 +0000 UTC m=+0.102418795 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 11:45:10 np0005542546 python3.9[230393]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:10 np0005542546 systemd[1]: Started libpod-conmon-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.scope.
Dec  2 11:45:10 np0005542546 podman[230394]: 2025-12-02 16:45:10.396811387 +0000 UTC m=+0.101222524 container exec 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 11:45:10 np0005542546 podman[230394]: 2025-12-02 16:45:10.429979693 +0000 UTC m=+0.134390750 container exec_died 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 11:45:10 np0005542546 systemd[1]: libpod-conmon-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.scope: Deactivated successfully.
Dec  2 11:45:11 np0005542546 python3.9[230572]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:11 np0005542546 systemd[1]: Started libpod-conmon-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.scope.
Dec  2 11:45:11 np0005542546 podman[230573]: 2025-12-02 16:45:11.394780698 +0000 UTC m=+0.140278567 container exec 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 11:45:11 np0005542546 podman[230573]: 2025-12-02 16:45:11.427875352 +0000 UTC m=+0.173373181 container exec_died 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  2 11:45:11 np0005542546 systemd[1]: libpod-conmon-201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050.scope: Deactivated successfully.
Dec  2 11:45:12 np0005542546 python3.9[230755]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:13 np0005542546 python3.9[230907]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  2 11:45:14 np0005542546 python3.9[231071]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:14 np0005542546 systemd[1]: Started libpod-conmon-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.scope.
Dec  2 11:45:14 np0005542546 podman[231072]: 2025-12-02 16:45:14.571673278 +0000 UTC m=+0.140424381 container exec 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec  2 11:45:14 np0005542546 podman[231072]: 2025-12-02 16:45:14.580242496 +0000 UTC m=+0.148993609 container exec_died 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=edpm)
Dec  2 11:45:14 np0005542546 systemd[1]: libpod-conmon-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.scope: Deactivated successfully.
Dec  2 11:45:15 np0005542546 podman[231255]: 2025-12-02 16:45:15.386760465 +0000 UTC m=+0.093332184 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 11:45:15 np0005542546 podman[231254]: 2025-12-02 16:45:15.428464549 +0000 UTC m=+0.138864430 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 11:45:15 np0005542546 python3.9[231256]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 11:45:15 np0005542546 systemd[1]: Started libpod-conmon-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.scope.
Dec  2 11:45:15 np0005542546 podman[231302]: 2025-12-02 16:45:15.670914923 +0000 UTC m=+0.144787327 container exec 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release-0.7.12=, release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  2 11:45:15 np0005542546 podman[231302]: 2025-12-02 16:45:15.706533625 +0000 UTC m=+0.180405999 container exec_died 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, name=ubi9, release-0.7.12=, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible)
Dec  2 11:45:16 np0005542546 systemd[1]: libpod-conmon-67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854.scope: Deactivated successfully.
Dec  2 11:45:16 np0005542546 podman[231355]: 2025-12-02 16:45:16.728130507 +0000 UTC m=+0.109739502 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 11:45:17 np0005542546 python3.9[231506]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:18 np0005542546 python3.9[231660]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:19 np0005542546 python3.9[231812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:19 np0005542546 python3.9[231936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764693918.5694528-844-230463942092732/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:20 np0005542546 python3.9[232088]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:21 np0005542546 python3.9[232240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:22 np0005542546 python3.9[232318]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:22 np0005542546 python3.9[232470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:23 np0005542546 python3.9[232548]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lu9pg5ia recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:24 np0005542546 python3.9[232700]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:24 np0005542546 python3.9[232778]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:25 np0005542546 python3.9[232930]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:45:26 np0005542546 python3[233083]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 11:45:27 np0005542546 python3.9[233235]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:28 np0005542546 python3.9[233313]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:29 np0005542546 podman[233437]: 2025-12-02 16:45:29.167039609 +0000 UTC m=+0.102296853 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  2 11:45:29 np0005542546 python3.9[233478]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:29 np0005542546 podman[203941]: time="2025-12-02T16:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:45:29 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  2 11:45:29 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4250 "" "Go-http-client/1.1"
Dec  2 11:45:29 np0005542546 python3.9[233565]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:30 np0005542546 python3.9[233717]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:31 np0005542546 python3.9[233795]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: 
Dec  2 11:45:31 np0005542546 openstack_network_exporter[206093]: ERROR   16:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 11:45:31 np0005542546 python3.9[233947]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:32 np0005542546 python3.9[234025]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:33 np0005542546 python3.9[234177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:34 np0005542546 python3.9[234302]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764693932.8014097-969-176097353013340/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:35 np0005542546 python3.9[234454]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:36 np0005542546 python3.9[234606]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:45:36 np0005542546 podman[234614]: 2025-12-02 16:45:36.282736025 +0000 UTC m=+0.100792393 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:45:37 np0005542546 python3.9[234781]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:37 np0005542546 podman[234905]: 2025-12-02 16:45:37.970287462 +0000 UTC m=+0.123884440 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 11:45:38 np0005542546 python3.9[234951]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:45:38 np0005542546 podman[235077]: 2025-12-02 16:45:38.928265944 +0000 UTC m=+0.091789472 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  2 11:45:39 np0005542546 python3.9[235119]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 11:45:39 np0005542546 python3.9[235276]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 11:45:40 np0005542546 podman[235306]: 2025-12-02 16:45:40.239244394 +0000 UTC m=+0.068065519 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, container_name=kepler)
Dec  2 11:45:40 np0005542546 podman[235315]: 2025-12-02 16:45:40.257028469 +0000 UTC m=+0.081112347 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 11:45:40 np0005542546 python3.9[235467]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:41 np0005542546 systemd[1]: session-26.scope: Deactivated successfully.
Dec  2 11:45:41 np0005542546 systemd[1]: session-26.scope: Consumed 1min 31.948s CPU time.
Dec  2 11:45:41 np0005542546 systemd-logind[790]: Session 26 logged out. Waiting for processes to exit.
Dec  2 11:45:41 np0005542546 systemd-logind[790]: Removed session 26.
Dec  2 11:45:46 np0005542546 podman[235493]: 2025-12-02 16:45:46.310284552 +0000 UTC m=+0.118461484 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 11:45:46 np0005542546 podman[235492]: 2025-12-02 16:45:46.333959214 +0000 UTC m=+0.157892237 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 11:45:46 np0005542546 systemd-logind[790]: New session 27 of user zuul.
Dec  2 11:45:46 np0005542546 systemd[1]: Started Session 27 of User zuul.
Dec  2 11:45:47 np0005542546 podman[235541]: 2025-12-02 16:45:47.089411739 +0000 UTC m=+0.134243336 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 11:45:48 np0005542546 python3.9[235716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 11:45:49 np0005542546 python3.9[235873]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  2 11:45:50 np0005542546 python3.9[236026]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 11:45:52 np0005542546 python3.9[236110]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 11:45:55 np0005542546 python3.9[236268]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:56 np0005542546 python3.9[236391]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693954.6074069-54-32321748674817/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:57 np0005542546 python3.9[236543]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:58 np0005542546 python3.9[236695]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 11:45:59 np0005542546 python3.9[236818]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764693957.6785722-77-269111580230637/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 11:45:59 np0005542546 podman[236942]: 2025-12-02 16:45:59.731621368 +0000 UTC m=+0.106035217 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350)
Dec  2 11:45:59 np0005542546 podman[203941]: time="2025-12-02T16:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 11:45:59 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 11:45:59 np0005542546 podman[203941]: @ - - [02/Dec/2025:16:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4252 "" "Go-http-client/1.1"
Dec  2 16:46:00 compute-0 python3.9[236991]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 16:46:00 compute-0 systemd[1]: Stopping System Logging Service...
Dec  2 16:46:00 compute-0 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  2 16:46:00 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  2 16:46:00 compute-0 systemd[1]: Stopped System Logging Service.
Dec  2 16:46:00 compute-0 systemd[1]: rsyslog.service: Consumed 3.329s CPU time, 7.3M memory peak, read 0B from disk, written 6.1M to disk.
Dec  2 16:46:00 compute-0 systemd[1]: Starting System Logging Service...
Dec  2 16:46:00 compute-0 rsyslogd[236995]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236995" x-info="https://www.rsyslog.com"] start
Dec  2 16:46:00 compute-0 systemd[1]: Started System Logging Service.
Dec  2 16:46:00 compute-0 rsyslogd[236995]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 16:46:00 compute-0 rsyslogd[236995]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  2 16:46:00 compute-0 rsyslogd[236995]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  2 16:46:00 compute-0 rsyslogd[236995]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  2 16:46:00 compute-0 rsyslogd[236995]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec  2 16:46:00 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec  2 16:46:00 compute-0 systemd[1]: session-27.scope: Consumed 11.221s CPU time.
Dec  2 16:46:00 compute-0 systemd-logind[790]: Session 27 logged out. Waiting for processes to exit.
Dec  2 16:46:00 compute-0 systemd-logind[790]: Removed session 27.
Dec  2 16:46:01 compute-0 nova_compute[189459]: 2025-12-02 16:46:01.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:01 compute-0 nova_compute[189459]: 2025-12-02 16:46:01.413 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: ERROR   16:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: ERROR   16:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: ERROR   16:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:46:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:46:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:46:01.847 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:46:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:46:01.848 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:46:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:46:01.849 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:46:02 compute-0 nova_compute[189459]: 2025-12-02 16:46:02.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:02 compute-0 nova_compute[189459]: 2025-12-02 16:46:02.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:02 compute-0 nova_compute[189459]: 2025-12-02 16:46:02.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:46:03 compute-0 nova_compute[189459]: 2025-12-02 16:46:03.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:03 compute-0 nova_compute[189459]: 2025-12-02 16:46:03.426 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:03 compute-0 nova_compute[189459]: 2025-12-02 16:46:03.427 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:46:03 compute-0 nova_compute[189459]: 2025-12-02 16:46:03.427 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:46:03 compute-0 nova_compute[189459]: 2025-12-02 16:46:03.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 16:46:04 compute-0 nova_compute[189459]: 2025-12-02 16:46:04.437 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.437 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.437 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.438 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.823 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.824 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.25408935546875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.825 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.825 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.905 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.905 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.932 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.947 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.949 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:46:05 compute-0 nova_compute[189459]: 2025-12-02 16:46:05.950 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:46:06 compute-0 nova_compute[189459]: 2025-12-02 16:46:06.948 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:46:07 compute-0 podman[237024]: 2025-12-02 16:46:07.275570437 +0000 UTC m=+0.110237648 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  2 16:46:08 compute-0 podman[237043]: 2025-12-02 16:46:08.286132782 +0000 UTC m=+0.113376363 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec  2 16:46:09 compute-0 podman[237063]: 2025-12-02 16:46:09.258613008 +0000 UTC m=+0.088380084 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 16:46:11 compute-0 podman[237084]: 2025-12-02 16:46:11.227656195 +0000 UTC m=+0.057875289 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  2 16:46:11 compute-0 podman[237083]: 2025-12-02 16:46:11.237235501 +0000 UTC m=+0.071118433 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git)
Dec  2 16:46:17 compute-0 podman[237119]: 2025-12-02 16:46:17.268614123 +0000 UTC m=+0.082352243 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 16:46:17 compute-0 podman[237120]: 2025-12-02 16:46:17.272448376 +0000 UTC m=+0.095412493 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 16:46:17 compute-0 podman[237118]: 2025-12-02 16:46:17.320796019 +0000 UTC m=+0.139920113 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  2 16:46:29 compute-0 podman[203941]: time="2025-12-02T16:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:46:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:46:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4268 "" "Go-http-client/1.1"
Dec  2 16:46:30 compute-0 podman[237188]: 2025-12-02 16:46:30.246564661 +0000 UTC m=+0.074677548 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: ERROR   16:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: ERROR   16:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: ERROR   16:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:46:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:46:35 compute-0 systemd-logind[790]: New session 28 of user zuul.
Dec  2 16:46:35 compute-0 systemd[1]: Started Session 28 of User zuul.
Dec  2 16:46:37 compute-0 python3[237386]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 16:46:38 compute-0 podman[237459]: 2025-12-02 16:46:38.286949139 +0000 UTC m=+0.112135630 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  2 16:46:38 compute-0 podman[237502]: 2025-12-02 16:46:38.461004384 +0000 UTC m=+0.120683469 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 16:46:39 compute-0 python3[237646]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 16:46:40 compute-0 podman[237774]: 2025-12-02 16:46:40.133095378 +0000 UTC m=+0.119238580 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:46:40 compute-0 python3[237821]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 16:46:42 compute-0 podman[237923]: 2025-12-02 16:46:42.288714465 +0000 UTC m=+0.114211746 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, com.redhat.component=ubi9-container)
Dec  2 16:46:42 compute-0 podman[237925]: 2025-12-02 16:46:42.299428401 +0000 UTC m=+0.111529914 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec  2 16:46:42 compute-0 python3[238007]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 16:46:43 compute-0 python3[238161]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 16:46:45 compute-0 python3[238386]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 16:46:46 compute-0 python3[238551]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 16:46:48 compute-0 podman[238594]: 2025-12-02 16:46:48.274758644 +0000 UTC m=+0.090758199 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 16:46:48 compute-0 podman[238592]: 2025-12-02 16:46:48.297470641 +0000 UTC m=+0.121141031 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 16:46:48 compute-0 podman[238593]: 2025-12-02 16:46:48.315222916 +0000 UTC m=+0.128494747 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:46:59 compute-0 podman[203941]: time="2025-12-02T16:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:46:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:46:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
Dec  2 16:47:01 compute-0 podman[238663]: 2025-12-02 16:47:01.323968354 +0000 UTC m=+0.140009956 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: ERROR   16:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: ERROR   16:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: ERROR   16:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:47:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:47:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:47:01.849 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:47:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:47:01.851 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:47:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:47:01.851 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:47:02 compute-0 nova_compute[189459]: 2025-12-02 16:47:02.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.045 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.046 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.049 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1fd0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:47:03.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:47:03 compute-0 nova_compute[189459]: 2025-12-02 16:47:03.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:04 compute-0 nova_compute[189459]: 2025-12-02 16:47:04.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:04 compute-0 nova_compute[189459]: 2025-12-02 16:47:04.413 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:04 compute-0 nova_compute[189459]: 2025-12-02 16:47:04.413 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:47:05 compute-0 nova_compute[189459]: 2025-12-02 16:47:05.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:05 compute-0 nova_compute[189459]: 2025-12-02 16:47:05.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:05 compute-0 nova_compute[189459]: 2025-12-02 16:47:05.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:47:05 compute-0 nova_compute[189459]: 2025-12-02 16:47:05.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:47:05 compute-0 nova_compute[189459]: 2025-12-02 16:47:05.426 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 16:47:05 compute-0 nova_compute[189459]: 2025-12-02 16:47:05.426 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.436 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.808 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.809 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5699MB free_disk=72.25410842895508GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.809 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.810 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.882 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.883 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.906 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.922 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.924 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:47:06 compute-0 nova_compute[189459]: 2025-12-02 16:47:06.924 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:47:07 compute-0 nova_compute[189459]: 2025-12-02 16:47:07.923 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:47:09 compute-0 podman[238686]: 2025-12-02 16:47:09.260030584 +0000 UTC m=+0.078824676 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 16:47:09 compute-0 podman[238685]: 2025-12-02 16:47:09.264178196 +0000 UTC m=+0.089844294 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 16:47:11 compute-0 podman[238720]: 2025-12-02 16:47:11.28419959 +0000 UTC m=+0.109528474 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:47:13 compute-0 podman[238741]: 2025-12-02 16:47:13.263548637 +0000 UTC m=+0.084450728 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:47:13 compute-0 podman[238740]: 2025-12-02 16:47:13.286932718 +0000 UTC m=+0.102797994 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 16:47:19 compute-0 podman[238779]: 2025-12-02 16:47:19.363288568 +0000 UTC m=+0.074296405 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:47:19 compute-0 podman[238778]: 2025-12-02 16:47:19.384814978 +0000 UTC m=+0.100697596 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:47:19 compute-0 podman[238777]: 2025-12-02 16:47:19.427212092 +0000 UTC m=+0.146862592 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:47:29 compute-0 podman[203941]: time="2025-12-02T16:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:47:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:47:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4262 "" "Go-http-client/1.1"
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: ERROR   16:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: ERROR   16:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: ERROR   16:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:47:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:47:32 compute-0 podman[238849]: 2025-12-02 16:47:32.281006059 +0000 UTC m=+0.111895279 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 16:47:40 compute-0 podman[238871]: 2025-12-02 16:47:40.243560072 +0000 UTC m=+0.073942195 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:47:40 compute-0 podman[238870]: 2025-12-02 16:47:40.275655968 +0000 UTC m=+0.110442150 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 16:47:42 compute-0 podman[238909]: 2025-12-02 16:47:42.236791603 +0000 UTC m=+0.072835195 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  2 16:47:44 compute-0 podman[238930]: 2025-12-02 16:47:44.253536128 +0000 UTC m=+0.076190926 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.4, config_id=edpm, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec  2 16:47:44 compute-0 podman[238931]: 2025-12-02 16:47:44.261632536 +0000 UTC m=+0.086234456 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 16:47:46 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec  2 16:47:46 compute-0 systemd[1]: session-28.scope: Consumed 9.262s CPU time.
Dec  2 16:47:46 compute-0 systemd-logind[790]: Session 28 logged out. Waiting for processes to exit.
Dec  2 16:47:46 compute-0 systemd-logind[790]: Removed session 28.
Dec  2 16:47:50 compute-0 podman[238969]: 2025-12-02 16:47:50.288690679 +0000 UTC m=+0.110828630 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 16:47:50 compute-0 podman[238975]: 2025-12-02 16:47:50.294913367 +0000 UTC m=+0.107126950 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 16:47:50 compute-0 podman[238968]: 2025-12-02 16:47:50.32356913 +0000 UTC m=+0.154921059 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  2 16:47:59 compute-0 podman[203941]: time="2025-12-02T16:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:47:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:47:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4271 "" "Go-http-client/1.1"
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: ERROR   16:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: ERROR   16:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: ERROR   16:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:48:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:48:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:48:01.852 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:48:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:48:01.852 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:48:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:48:01.852 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:48:03 compute-0 podman[239038]: 2025-12-02 16:48:03.242639291 +0000 UTC m=+0.071545972 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 16:48:03 compute-0 nova_compute[189459]: 2025-12-02 16:48:03.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:04 compute-0 nova_compute[189459]: 2025-12-02 16:48:04.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:04 compute-0 nova_compute[189459]: 2025-12-02 16:48:04.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:05 compute-0 nova_compute[189459]: 2025-12-02 16:48:05.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:05 compute-0 nova_compute[189459]: 2025-12-02 16:48:05.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:48:05 compute-0 nova_compute[189459]: 2025-12-02 16:48:05.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:48:05 compute-0 nova_compute[189459]: 2025-12-02 16:48:05.424 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 16:48:05 compute-0 nova_compute[189459]: 2025-12-02 16:48:05.424 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:05 compute-0 nova_compute[189459]: 2025-12-02 16:48:05.425 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:48:06 compute-0 nova_compute[189459]: 2025-12-02 16:48:06.421 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:06 compute-0 nova_compute[189459]: 2025-12-02 16:48:06.422 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:07 compute-0 nova_compute[189459]: 2025-12-02 16:48:07.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:07 compute-0 nova_compute[189459]: 2025-12-02 16:48:07.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.445 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.445 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.445 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.828 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.829 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.25423431396484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.830 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.830 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.885 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.885 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.907 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.926 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.927 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:48:08 compute-0 nova_compute[189459]: 2025-12-02 16:48:08.927 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:48:11 compute-0 podman[239058]: 2025-12-02 16:48:11.265633477 +0000 UTC m=+0.092622694 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.vendor=CentOS, 
config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 16:48:11 compute-0 podman[239059]: 2025-12-02 16:48:11.268682799 +0000 UTC m=+0.090974641 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:48:13 compute-0 podman[239096]: 2025-12-02 16:48:13.291090933 +0000 UTC m=+0.116389689 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 16:48:14 compute-0 podman[239115]: 2025-12-02 16:48:14.745501302 +0000 UTC m=+0.077393688 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, 
managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 16:48:14 compute-0 podman[239116]: 2025-12-02 16:48:14.770737346 +0000 UTC m=+0.087194320 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 16:48:21 compute-0 podman[239154]: 2025-12-02 16:48:21.238561995 +0000 UTC m=+0.062267654 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:48:21 compute-0 podman[239153]: 2025-12-02 16:48:21.277293389 +0000 UTC m=+0.101214104 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 16:48:21 compute-0 podman[239152]: 2025-12-02 16:48:21.315751506 +0000 UTC m=+0.143360809 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec  2 16:48:29 compute-0 podman[203941]: time="2025-12-02T16:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:48:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:48:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4259 "" "Go-http-client/1.1"
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: ERROR   16:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: ERROR   16:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: ERROR   16:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:48:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:48:34 compute-0 podman[239220]: 2025-12-02 16:48:34.330990102 +0000 UTC m=+0.146764340 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  2 16:48:42 compute-0 podman[239241]: 2025-12-02 16:48:42.263077235 +0000 UTC m=+0.081198569 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  2 16:48:42 compute-0 podman[239242]: 2025-12-02 16:48:42.267305768 +0000 UTC m=+0.080316606 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 16:48:44 compute-0 podman[239278]: 2025-12-02 16:48:44.295974631 +0000 UTC m=+0.123268823 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:48:45 compute-0 podman[239298]: 2025-12-02 16:48:45.270923633 +0000 UTC m=+0.080941282 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  2 16:48:45 compute-0 podman[239297]: 2025-12-02 16:48:45.285716908 +0000 UTC m=+0.100716160 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 16:48:52 compute-0 podman[239334]: 2025-12-02 16:48:52.252101781 +0000 UTC m=+0.071997053 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:48:52 compute-0 podman[239333]: 2025-12-02 16:48:52.25652898 +0000 UTC m=+0.075987490 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:48:52 compute-0 podman[239332]: 2025-12-02 16:48:52.287164448 +0000 UTC m=+0.114522879 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  2 16:48:57 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:48:57.695 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:48:57 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:48:57.697 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 16:48:57 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:48:57.698 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:48:59 compute-0 podman[203941]: time="2025-12-02T16:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:48:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:48:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4260 "" "Go-http-client/1.1"
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: ERROR   16:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: ERROR   16:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: ERROR   16:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:49:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:49:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:49:01.854 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:49:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:49:01.855 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:49:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:49:01.855 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.046 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.047 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:49:03.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:49:04 compute-0 nova_compute[189459]: 2025-12-02 16:49:04.928 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:04 compute-0 nova_compute[189459]: 2025-12-02 16:49:04.929 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:05 compute-0 podman[239403]: 2025-12-02 16:49:05.261466961 +0000 UTC m=+0.087286722 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 16:49:05 compute-0 nova_compute[189459]: 2025-12-02 16:49:05.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:06 compute-0 nova_compute[189459]: 2025-12-02 16:49:06.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:06 compute-0 nova_compute[189459]: 2025-12-02 16:49:06.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:49:07 compute-0 nova_compute[189459]: 2025-12-02 16:49:07.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:07 compute-0 nova_compute[189459]: 2025-12-02 16:49:07.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:07 compute-0 nova_compute[189459]: 2025-12-02 16:49:07.408 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:49:07 compute-0 nova_compute[189459]: 2025-12-02 16:49:07.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:49:07 compute-0 nova_compute[189459]: 2025-12-02 16:49:07.424 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 16:49:08 compute-0 nova_compute[189459]: 2025-12-02 16:49:08.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:08 compute-0 nova_compute[189459]: 2025-12-02 16:49:08.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.457 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.458 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.459 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.460 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.825 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.827 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.25421524047852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.827 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.828 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.892 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.893 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.917 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.935 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.938 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:49:10 compute-0 nova_compute[189459]: 2025-12-02 16:49:10.939 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:49:13 compute-0 podman[239425]: 2025-12-02 16:49:13.260496731 +0000 UTC m=+0.091036392 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 16:49:13 compute-0 podman[239426]: 2025-12-02 16:49:13.286080824 +0000 UTC m=+0.103284139 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  2 16:49:14 compute-0 podman[239464]: 2025-12-02 16:49:14.749694516 +0000 UTC m=+0.079477223 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:49:16 compute-0 podman[239485]: 2025-12-02 16:49:16.267417816 +0000 UTC m=+0.079694639 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Dec  2 16:49:16 compute-0 podman[239484]: 2025-12-02 16:49:16.271041913 +0000 UTC m=+0.095209683 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9)
Dec  2 16:49:23 compute-0 podman[239524]: 2025-12-02 16:49:23.281273886 +0000 UTC m=+0.093152988 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:49:23 compute-0 podman[239523]: 2025-12-02 16:49:23.321087379 +0000 UTC m=+0.152893643 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  2 16:49:23 compute-0 podman[239528]: 2025-12-02 16:49:23.322068756 +0000 UTC m=+0.131275787 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 16:49:29 compute-0 podman[203941]: time="2025-12-02T16:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:49:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:49:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4260 "" "Go-http-client/1.1"
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: ERROR   16:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: ERROR   16:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: ERROR   16:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:49:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:49:36 compute-0 podman[239590]: 2025-12-02 16:49:36.248969613 +0000 UTC m=+0.078826166 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Dec  2 16:49:44 compute-0 podman[239611]: 2025-12-02 16:49:44.229688184 +0000 UTC m=+0.060269900 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 16:49:44 compute-0 podman[239612]: 2025-12-02 16:49:44.235334345 +0000 UTC m=+0.066392644 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  2 16:49:45 compute-0 podman[239651]: 2025-12-02 16:49:45.269000566 +0000 UTC m=+0.093059466 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 16:49:47 compute-0 podman[239673]: 2025-12-02 16:49:47.274181211 +0000 UTC m=+0.097002141 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  2 16:49:47 compute-0 podman[239672]: 2025-12-02 16:49:47.276539204 +0000 UTC m=+0.106650349 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release=1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  2 16:49:54 compute-0 podman[239713]: 2025-12-02 16:49:54.25281757 +0000 UTC m=+0.069553288 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:49:54 compute-0 podman[239712]: 2025-12-02 16:49:54.269750083 +0000 UTC m=+0.086184073 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:49:54 compute-0 podman[239711]: 2025-12-02 16:49:54.286966732 +0000 UTC m=+0.114781456 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  2 16:49:56 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:49:56.470 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:49:56 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:49:56.470 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 16:49:59 compute-0 podman[203941]: time="2025-12-02T16:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:49:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 16:49:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4263 "" "Go-http-client/1.1"
Dec  2 16:50:01 compute-0 nova_compute[189459]: 2025-12-02 16:50:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:01 compute-0 nova_compute[189459]: 2025-12-02 16:50:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: ERROR   16:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: ERROR   16:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: ERROR   16:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:50:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:50:01 compute-0 nova_compute[189459]: 2025-12-02 16:50:01.439 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 16:50:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:01.474 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:01.855 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:01.855 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:01.856 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:05 compute-0 nova_compute[189459]: 2025-12-02 16:50:05.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:05 compute-0 nova_compute[189459]: 2025-12-02 16:50:05.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 16:50:06 compute-0 nova_compute[189459]: 2025-12-02 16:50:06.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:06 compute-0 nova_compute[189459]: 2025-12-02 16:50:06.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:07 compute-0 podman[239783]: 2025-12-02 16:50:07.287156579 +0000 UTC m=+0.111711044 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public)
Dec  2 16:50:07 compute-0 nova_compute[189459]: 2025-12-02 16:50:07.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.427 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.427 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.428 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:50:08 compute-0 nova_compute[189459]: 2025-12-02 16:50:08.428 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.422 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.422 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.695 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.696 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.717 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.874 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.876 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.888 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 16:50:09 compute-0 nova_compute[189459]: 2025-12-02 16:50:09.889 189463 INFO nova.compute.claims [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.018 189463 DEBUG nova.scheduler.client.report [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.065 189463 DEBUG nova.scheduler.client.report [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.065 189463 DEBUG nova.compute.provider_tree [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.086 189463 DEBUG nova.scheduler.client.report [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.118 189463 DEBUG nova.scheduler.client.report [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.164 189463 DEBUG nova.compute.provider_tree [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.178 189463 DEBUG nova.scheduler.client.report [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.212 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.213 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.257 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.258 189463 DEBUG nova.network.neutron [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.277 189463 INFO nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.326 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.407 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.408 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.409 189463 INFO nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Creating image(s)#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.410 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.410 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.411 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.411 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.412 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.413 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.448 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.448 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.449 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.450 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.825 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.827 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5700MB free_disk=72.2542839050293GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.827 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.828 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.892 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.893 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.893 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.938 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.954 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.978 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:50:10 compute-0 nova_compute[189459]: 2025-12-02 16:50:10.979 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.556 189463 WARNING oslo_policy.policy [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.557 189463 WARNING oslo_policy.policy [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.699 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.800 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.part --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.801 189463 DEBUG nova.virt.images [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] 5b0e8045-c81c-486a-86d2-bf0e0fd17a5a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.803 189463 DEBUG nova.privsep.utils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.804 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.part /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:11 compute-0 nova_compute[189459]: 2025-12-02 16:50:11.972 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.004 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.part /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.converted" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.008 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.085 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.086 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.101 189463 INFO oslo.privsep.daemon [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpouj105wh/privsep.sock']#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.776 189463 INFO oslo.privsep.daemon [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.666 239820 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.673 239820 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.677 239820 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.677 239820 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239820#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.855 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.917 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.919 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.920 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:12 compute-0 nova_compute[189459]: 2025-12-02 16:50:12.943 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.000 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.001 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.049 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.050 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.051 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.110 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.111 189463 DEBUG nova.virt.disk.api [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Checking if we can resize image /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.112 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.173 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.174 189463 DEBUG nova.virt.disk.api [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Cannot resize image /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.174 189463 DEBUG nova.objects.instance [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'migration_context' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.190 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.191 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.191 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.192 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.193 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.193 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.223 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.224 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.257 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.258 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.270 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.349 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.350 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.351 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.364 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.422 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.423 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.461 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.462 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.463 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.518 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.519 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.519 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Ensure instance console log exists: /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.520 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.521 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.521 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:13 compute-0 nova_compute[189459]: 2025-12-02 16:50:13.695 189463 DEBUG nova.network.neutron [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Successfully created port: 88cefba1-abc8-4573-900a-031390192acc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 16:50:14 compute-0 podman[239854]: 2025-12-02 16:50:14.816469565 +0000 UTC m=+0.125691416 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 16:50:14 compute-0 podman[239853]: 2025-12-02 16:50:14.819891206 +0000 UTC m=+0.128749588 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.473 189463 DEBUG nova.network.neutron [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Successfully updated port: 88cefba1-abc8-4573-900a-031390192acc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.495 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.495 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.496 189463 DEBUG nova.network.neutron [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.676 189463 DEBUG nova.network.neutron [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.983 189463 DEBUG nova.compute.manager [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-changed-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.984 189463 DEBUG nova.compute.manager [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Refreshing instance network info cache due to event network-changed-88cefba1-abc8-4573-900a-031390192acc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 16:50:15 compute-0 nova_compute[189459]: 2025-12-02 16:50:15.984 189463 DEBUG oslo_concurrency.lockutils [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:50:16 compute-0 podman[239892]: 2025-12-02 16:50:16.750512034 +0000 UTC m=+0.106687769 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.925 189463 DEBUG nova.network.neutron [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.944 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.945 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Instance network_info: |[{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.945 189463 DEBUG oslo_concurrency.lockutils [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.946 189463 DEBUG nova.network.neutron [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Refreshing network info cache for port 88cefba1-abc8-4573-900a-031390192acc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.949 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Start _get_guest_xml network_info=[{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}], 'ephemerals': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 1, 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.956 189463 WARNING nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.961 189463 DEBUG nova.virt.libvirt.host [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.962 189463 DEBUG nova.virt.libvirt.host [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.969 189463 DEBUG nova.virt.libvirt.host [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.970 189463 DEBUG nova.virt.libvirt.host [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.971 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.971 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T16:48:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8aba0aff-301c-4123-b0dc-aba3acd2a3ad',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.972 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.972 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.972 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.973 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.973 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.973 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.974 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.974 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.975 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.975 189463 DEBUG nova.virt.hardware [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.979 189463 DEBUG nova.privsep.utils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.981 189463 DEBUG nova.virt.libvirt.vif [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:50:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-4h695zkr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2025-12-02T16:50:10Z,user_data=None,user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.981 189463 DEBUG nova.network.os_vif_util [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.982 189463 DEBUG nova.network.os_vif_util [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:50:16 compute-0 nova_compute[189459]: 2025-12-02 16:50:16.984 189463 DEBUG nova.objects.instance [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.001 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] End _get_guest_xml xml=<domain type="kvm">
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <uuid>bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a</uuid>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <name>instance-00000001</name>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <memory>524288</memory>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <metadata>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:name>test_0</nova:name>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 16:50:16</nova:creationTime>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:flavor name="m1.small">
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:memory>512</nova:memory>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:ephemeral>1</nova:ephemeral>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:user uuid="91c12bcb1ad14b95b1bdedf7527f1adf">admin</nova:user>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:project uuid="2f96d47197fa40f2a7126bf626847d74">admin</nova:project>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        <nova:port uuid="88cefba1-abc8-4573-900a-031390192acc">
Dec  2 16:50:17 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="192.168.0.223" ipVersion="4"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </metadata>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <system>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <entry name="serial">bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a</entry>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <entry name="uuid">bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a</entry>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </system>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <os>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </os>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <features>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <apic/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </features>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </clock>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </cpu>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  <devices>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <target dev="vdb" bus="virtio"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.config"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:a3:87:16"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <target dev="tap88cefba1-ab"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </interface>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/console.log" append="off"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </serial>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <video>
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </video>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </rng>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 16:50:17 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 16:50:17 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 16:50:17 compute-0 nova_compute[189459]:  </devices>
Dec  2 16:50:17 compute-0 nova_compute[189459]: </domain>
Dec  2 16:50:17 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.003 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Preparing to wait for external event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.003 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.004 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.005 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.005 189463 DEBUG nova.virt.libvirt.vif [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:50:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-4h695zkr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:50:10Z,user_data=None,user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.006 189463 DEBUG nova.network.os_vif_util [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.007 189463 DEBUG nova.network.os_vif_util [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.007 189463 DEBUG os_vif [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.043 189463 DEBUG ovsdbapp.backend.ovs_idl [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.044 189463 DEBUG ovsdbapp.backend.ovs_idl [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.044 189463 DEBUG ovsdbapp.backend.ovs_idl [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.045 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.046 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.046 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.047 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.049 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.051 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.061 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.062 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.062 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.064 189463 INFO oslo.privsep.daemon [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmphy1wkczv/privsep.sock']#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.421 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.861 189463 INFO oslo.privsep.daemon [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.688 239916 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.694 239916 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.697 239916 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  2 16:50:17 compute-0 nova_compute[189459]: 2025-12-02 16:50:17.697 239916 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239916#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.228 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.228 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88cefba1-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.229 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88cefba1-ab, col_values=(('external_ids', {'iface-id': '88cefba1-abc8-4573-900a-031390192acc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:87:16', 'vm-uuid': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.231 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:18 compute-0 NetworkManager[56503]: <info>  [1764694218.2334] manager: (tap88cefba1-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.237 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.242 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.243 189463 INFO os_vif [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab')#033[00m
Dec  2 16:50:18 compute-0 podman[239920]: 2025-12-02 16:50:18.270972942 +0000 UTC m=+0.096996180 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 16:50:18 compute-0 podman[239921]: 2025-12-02 16:50:18.280411394 +0000 UTC m=+0.099757354 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.317 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.318 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.318 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.318 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No VIF found with MAC fa:16:3e:a3:87:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.319 189463 INFO nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Using config drive#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.510 189463 DEBUG nova.network.neutron [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated VIF entry in instance network info cache for port 88cefba1-abc8-4573-900a-031390192acc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.512 189463 DEBUG nova.network.neutron [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.528 189463 DEBUG oslo_concurrency.lockutils [req-885902a5-f7d4-461e-9cdb-b0a64caacecd req-93c05481-2617-4438-b0f2-8d50810003c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.737 189463 INFO nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Creating config drive at /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.config#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.743 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59uu6cma execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.874 189463 DEBUG oslo_concurrency.processutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59uu6cma" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:50:18 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  2 16:50:18 compute-0 kernel: tap88cefba1-ab: entered promiscuous mode
Dec  2 16:50:18 compute-0 ovn_controller[97975]: 2025-12-02T16:50:18Z|00027|binding|INFO|Claiming lport 88cefba1-abc8-4573-900a-031390192acc for this chassis.
Dec  2 16:50:18 compute-0 NetworkManager[56503]: <info>  [1764694218.9945] manager: (tap88cefba1-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec  2 16:50:18 compute-0 ovn_controller[97975]: 2025-12-02T16:50:18Z|00028|binding|INFO|88cefba1-abc8-4573-900a-031390192acc: Claiming fa:16:3e:a3:87:16 192.168.0.223
Dec  2 16:50:18 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.995 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:18.999 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:19 compute-0 systemd-udevd[239982]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 16:50:19 compute-0 NetworkManager[56503]: <info>  [1764694219.0618] device (tap88cefba1-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 16:50:19 compute-0 NetworkManager[56503]: <info>  [1764694219.0641] device (tap88cefba1-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.080 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:19 compute-0 ovn_controller[97975]: 2025-12-02T16:50:19Z|00029|binding|INFO|Setting lport 88cefba1-abc8-4573-900a-031390192acc ovn-installed in OVS
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.090 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.096 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:87:16 192.168.0.223'], port_security=['fa:16:3e:a3:87:16 192.168.0.223'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.223/24', 'neutron:device_id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=88cefba1-abc8-4573-900a-031390192acc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.098 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 88cefba1-abc8-4573-900a-031390192acc in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d bound to our chassis#033[00m
Dec  2 16:50:19 compute-0 ovn_controller[97975]: 2025-12-02T16:50:19Z|00030|binding|INFO|Setting lport 88cefba1-abc8-4573-900a-031390192acc up in Southbound
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.101 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.103 106835 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpv02tz2q2/privsep.sock']#033[00m
Dec  2 16:50:19 compute-0 systemd-machined[155878]: New machine qemu-1-instance-00000001.
Dec  2 16:50:19 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.576 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694219.5756235, bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.577 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] VM Started (Lifecycle Event)#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.621 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.629 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694219.5757885, bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.630 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] VM Paused (Lifecycle Event)#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.646 189463 DEBUG nova.compute.manager [req-4df03e59-d54b-48ae-a033-cc7f422d8c69 req-e2806fed-a74b-44a7-b844-36d3504b7efd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.647 189463 DEBUG oslo_concurrency.lockutils [req-4df03e59-d54b-48ae-a033-cc7f422d8c69 req-e2806fed-a74b-44a7-b844-36d3504b7efd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.647 189463 DEBUG oslo_concurrency.lockutils [req-4df03e59-d54b-48ae-a033-cc7f422d8c69 req-e2806fed-a74b-44a7-b844-36d3504b7efd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.648 189463 DEBUG oslo_concurrency.lockutils [req-4df03e59-d54b-48ae-a033-cc7f422d8c69 req-e2806fed-a74b-44a7-b844-36d3504b7efd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.648 189463 DEBUG nova.compute.manager [req-4df03e59-d54b-48ae-a033-cc7f422d8c69 req-e2806fed-a74b-44a7-b844-36d3504b7efd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Processing event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.649 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.651 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.657 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.661 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694219.6552293, bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.662 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] VM Resumed (Lifecycle Event)#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.668 189463 INFO nova.virt.libvirt.driver [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Instance spawned successfully.#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.670 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.686 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.693 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.745 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.756 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.757 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.758 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.758 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.759 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.760 189463 DEBUG nova.virt.libvirt.driver [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.816 189463 INFO nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Took 9.41 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.818 189463 DEBUG nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.886 106835 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.893 189463 INFO nova.compute.manager [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Took 10.10 seconds to build instance.#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.893 106835 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpv02tz2q2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.687 240010 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.693 240010 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.696 240010 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.696 240010 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240010#033[00m
Dec  2 16:50:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:19.899 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5416c1ff-40cf-42cb-8646-3f5981a27616]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:19 compute-0 nova_compute[189459]: 2025-12-02 16:50:19.914 189463 DEBUG oslo_concurrency.lockutils [None req-7931960d-5692-448c-b004-dee95d89b721 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:20 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:20.461 240010 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:20 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:20.461 240010 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:20 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:20.461 240010 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.133 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[761b4fe4-3cf5-4476-9083-36eef9f7ccec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.136 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de25f73-f1 in ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.139 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de25f73-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.139 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c1032a1b-6a59-461c-9938-cd44bf0d4e17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.142 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[7c51ef10-4c68-4ace-a008-6401b234472d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.171 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[ef23b476-580a-456e-a4ab-a38e17b3fbaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.205 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d1f041-b067-48fd-8fad-e119115e8c3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.207 106835 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpqnnw4ajk/privsep.sock']#033[00m
Dec  2 16:50:21 compute-0 nova_compute[189459]: 2025-12-02 16:50:21.724 189463 DEBUG nova.compute.manager [req-4ba0231a-1425-44ae-9851-5f3e6528a18c req-624fc266-b574-4691-99b3-0bd049e3ebd1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:50:21 compute-0 nova_compute[189459]: 2025-12-02 16:50:21.725 189463 DEBUG oslo_concurrency.lockutils [req-4ba0231a-1425-44ae-9851-5f3e6528a18c req-624fc266-b574-4691-99b3-0bd049e3ebd1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:21 compute-0 nova_compute[189459]: 2025-12-02 16:50:21.725 189463 DEBUG oslo_concurrency.lockutils [req-4ba0231a-1425-44ae-9851-5f3e6528a18c req-624fc266-b574-4691-99b3-0bd049e3ebd1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:21 compute-0 nova_compute[189459]: 2025-12-02 16:50:21.726 189463 DEBUG oslo_concurrency.lockutils [req-4ba0231a-1425-44ae-9851-5f3e6528a18c req-624fc266-b574-4691-99b3-0bd049e3ebd1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:21 compute-0 nova_compute[189459]: 2025-12-02 16:50:21.726 189463 DEBUG nova.compute.manager [req-4ba0231a-1425-44ae-9851-5f3e6528a18c req-624fc266-b574-4691-99b3-0bd049e3ebd1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] No waiting events found dispatching network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 16:50:21 compute-0 nova_compute[189459]: 2025-12-02 16:50:21.727 189463 WARNING nova.compute.manager [req-4ba0231a-1425-44ae-9851-5f3e6528a18c req-624fc266-b574-4691-99b3-0bd049e3ebd1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received unexpected event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc for instance with vm_state active and task_state None.#033[00m
Dec  2 16:50:21 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.996 106835 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.998 106835 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqnnw4ajk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  2 16:50:21 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.824 240024 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.829 240024 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.831 240024 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:21.831 240024 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240024#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:22.003 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[a3866c91-04ec-4d8e-9352-3b0a1480bd9c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:22 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  2 16:50:22 compute-0 nova_compute[189459]: 2025-12-02 16:50:22.424 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:22.628 240024 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:22.628 240024 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:22.628 240024 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.232 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.245 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[587b63c2-077b-48ef-96d4-6626540c8404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.272 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e3280b16-2522-45e5-843d-fec379abaccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 NetworkManager[56503]: <info>  [1764694223.2747] manager: (tap0de25f73-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.307 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[319559b9-f6d5-4b19-a252-6fafcc4b74f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.313 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[65f68b9b-7df0-4164-aa94-efaf2f0c6e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 systemd-udevd[240056]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 16:50:23 compute-0 NetworkManager[56503]: <info>  [1764694223.3497] device (tap0de25f73-f0): carrier: link connected
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.358 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[f09860d2-1f08-43d4-b7b2-26518afb60c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.379 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc52fb8-1484-4aaa-a3cb-a07704604b38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 40853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240073, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.395 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c27af9ac-5182-47e5-8676-3b833b507be8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:b463'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377188, 'tstamp': 377188}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240074, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.411 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[3178848e-b874-46b8-acb5-85ff5c19f632]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 40853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240075, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.441 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[13f86528-bdbe-4949-b1d6-84ee6e7a03a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.503 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5b50f582-3664-4204-a364-143b5f3cc0f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.510 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.511 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.511 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.514 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:23 compute-0 NetworkManager[56503]: <info>  [1764694223.5158] manager: (tap0de25f73-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec  2 16:50:23 compute-0 kernel: tap0de25f73-f0: entered promiscuous mode
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.520 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.521 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.523 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:23 compute-0 ovn_controller[97975]: 2025-12-02T16:50:23Z|00031|binding|INFO|Releasing lport eee37dc5-79f7-4a26-b100-4f955e7030f8 from this chassis (sb_readonly=0)
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.552 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.554 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de25f73-f1ea-4477-bf20-c9bdbb417b7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de25f73-f1ea-4477-bf20-c9bdbb417b7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.556 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.555 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa03e2d-821c-4ee7-9975-75da57eaf8d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.558 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: global
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-0de25f73-f1ea-4477-bf20-c9bdbb417b7d
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/0de25f73-f1ea-4477-bf20-c9bdbb417b7d.pid.haproxy
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 0de25f73-f1ea-4477-bf20-c9bdbb417b7d
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 16:50:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:50:23.562 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'env', 'PROCESS_TAG=haproxy-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de25f73-f1ea-4477-bf20-c9bdbb417b7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.729 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.764 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.765 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.765 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:50:23 compute-0 nova_compute[189459]: 2025-12-02 16:50:23.828 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:50:24 compute-0 podman[240107]: 2025-12-02 16:50:24.035075145 +0000 UTC m=+0.119579533 container create e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 16:50:24 compute-0 podman[240107]: 2025-12-02 16:50:23.945799542 +0000 UTC m=+0.030303930 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 16:50:24 compute-0 systemd[1]: Started libpod-conmon-e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70.scope.
Dec  2 16:50:24 compute-0 systemd[1]: Started libcrun container.
Dec  2 16:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68808cb0c0ad86e442f82eeebd636a4d93f745b23fe002550bffff69807978eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 16:50:24 compute-0 podman[240107]: 2025-12-02 16:50:24.221515372 +0000 UTC m=+0.306019750 container init e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:50:24 compute-0 podman[240107]: 2025-12-02 16:50:24.228900109 +0000 UTC m=+0.313404477 container start e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 16:50:24 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [NOTICE]   (240124) : New worker (240126) forked
Dec  2 16:50:24 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [NOTICE]   (240124) : Loading success.
Dec  2 16:50:25 compute-0 podman[240137]: 2025-12-02 16:50:25.246054751 +0000 UTC m=+0.066727042 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 16:50:25 compute-0 podman[240136]: 2025-12-02 16:50:25.256934922 +0000 UTC m=+0.080254074 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 16:50:25 compute-0 podman[240135]: 2025-12-02 16:50:25.308926099 +0000 UTC m=+0.135703753 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true)
Dec  2 16:50:27 compute-0 nova_compute[189459]: 2025-12-02 16:50:27.427 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:28 compute-0 nova_compute[189459]: 2025-12-02 16:50:28.235 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:29 compute-0 podman[203941]: time="2025-12-02T16:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:50:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:50:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4752 "" "Go-http-client/1.1"
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: ERROR   16:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: ERROR   16:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: ERROR   16:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:50:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:50:32 compute-0 nova_compute[189459]: 2025-12-02 16:50:32.429 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:33 compute-0 nova_compute[189459]: 2025-12-02 16:50:33.237 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:34 compute-0 ovn_controller[97975]: 2025-12-02T16:50:34Z|00032|binding|INFO|Releasing lport eee37dc5-79f7-4a26-b100-4f955e7030f8 from this chassis (sb_readonly=0)
Dec  2 16:50:34 compute-0 nova_compute[189459]: 2025-12-02 16:50:34.711 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7146] manager: (patch-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7200] device (patch-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7312] manager: (patch-br-int-to-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7362] device (patch-br-int-to-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7470] manager: (patch-br-int-to-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec  2 16:50:34 compute-0 ovn_controller[97975]: 2025-12-02T16:50:34Z|00033|binding|INFO|Releasing lport eee37dc5-79f7-4a26-b100-4f955e7030f8 from this chassis (sb_readonly=0)
Dec  2 16:50:34 compute-0 nova_compute[189459]: 2025-12-02 16:50:34.749 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7518] manager: (patch-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7553] device (patch-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  2 16:50:34 compute-0 NetworkManager[56503]: <info>  [1764694234.7585] device (patch-br-int-to-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  2 16:50:34 compute-0 nova_compute[189459]: 2025-12-02 16:50:34.759 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:35 compute-0 nova_compute[189459]: 2025-12-02 16:50:35.075 189463 DEBUG nova.compute.manager [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-changed-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:50:35 compute-0 nova_compute[189459]: 2025-12-02 16:50:35.076 189463 DEBUG nova.compute.manager [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Refreshing instance network info cache due to event network-changed-88cefba1-abc8-4573-900a-031390192acc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 16:50:35 compute-0 nova_compute[189459]: 2025-12-02 16:50:35.077 189463 DEBUG oslo_concurrency.lockutils [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:50:35 compute-0 nova_compute[189459]: 2025-12-02 16:50:35.077 189463 DEBUG oslo_concurrency.lockutils [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:50:35 compute-0 nova_compute[189459]: 2025-12-02 16:50:35.078 189463 DEBUG nova.network.neutron [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Refreshing network info cache for port 88cefba1-abc8-4573-900a-031390192acc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 16:50:36 compute-0 nova_compute[189459]: 2025-12-02 16:50:36.176 189463 DEBUG nova.network.neutron [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated VIF entry in instance network info cache for port 88cefba1-abc8-4573-900a-031390192acc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 16:50:36 compute-0 nova_compute[189459]: 2025-12-02 16:50:36.177 189463 DEBUG nova.network.neutron [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:50:36 compute-0 nova_compute[189459]: 2025-12-02 16:50:36.205 189463 DEBUG oslo_concurrency.lockutils [req-89361675-51e7-4254-b439-c3e1c74aad12 req-4c952fe0-a699-4d22-80dc-c888c3eeacf9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:50:37 compute-0 nova_compute[189459]: 2025-12-02 16:50:37.432 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:38 compute-0 nova_compute[189459]: 2025-12-02 16:50:38.241 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:38 compute-0 podman[240210]: 2025-12-02 16:50:38.291022646 +0000 UTC m=+0.109094494 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git)
Dec  2 16:50:42 compute-0 nova_compute[189459]: 2025-12-02 16:50:42.435 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:43 compute-0 nova_compute[189459]: 2025-12-02 16:50:43.244 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:50:45 compute-0 podman[240233]: 2025-12-02 16:50:45.268495318 +0000 UTC m=+0.092076729 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 16:50:45 compute-0 podman[240232]: 2025-12-02 16:50:45.293561078 +0000 UTC m=+0.120044026 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  2 16:50:47 compute-0 podman[240265]: 2025-12-02 16:50:47.265142808 +0000 UTC m=+0.088319068 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 16:50:47 compute-0 nova_compute[189459]: 2025-12-02 16:50:47.440 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:50:48 compute-0 nova_compute[189459]: 2025-12-02 16:50:48.247 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:50:49 compute-0 podman[240285]: 2025-12-02 16:50:49.255909462 +0000 UTC m=+0.080310475 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:50:49 compute-0 podman[240284]: 2025-12-02 16:50:49.262303413 +0000 UTC m=+0.091548675 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, container_name=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0)
Dec  2 16:50:52 compute-0 nova_compute[189459]: 2025-12-02 16:50:52.445 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:50:52 compute-0 ovn_controller[97975]: 2025-12-02T16:50:52Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:87:16 192.168.0.223
Dec  2 16:50:52 compute-0 ovn_controller[97975]: 2025-12-02T16:50:52Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:87:16 192.168.0.223
Dec  2 16:50:53 compute-0 nova_compute[189459]: 2025-12-02 16:50:53.250 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:50:56 compute-0 podman[240336]: 2025-12-02 16:50:56.244411819 +0000 UTC m=+0.065170851 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:50:56 compute-0 podman[240337]: 2025-12-02 16:50:56.28004895 +0000 UTC m=+0.091920255 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:50:56 compute-0 podman[240335]: 2025-12-02 16:50:56.287345915 +0000 UTC m=+0.108509118 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec  2 16:50:57 compute-0 nova_compute[189459]: 2025-12-02 16:50:57.447 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:50:58 compute-0 nova_compute[189459]: 2025-12-02 16:50:58.253 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:50:59 compute-0 podman[203941]: time="2025-12-02T16:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:50:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:50:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4756 "" "Go-http-client/1.1"
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: ERROR   16:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: ERROR   16:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: ERROR   16:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:51:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:51:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:01.856 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 16:51:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:01.857 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 16:51:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:01.858 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 16:51:02 compute-0 nova_compute[189459]: 2025-12-02 16:51:02.449 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.047 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.047 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.058 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 16:51:03 compute-0 nova_compute[189459]: 2025-12-02 16:51:03.256 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:51:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:03.417 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.676 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Tue, 02 Dec 2025 16:51:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-50420de1-cb9b-4c7c-a31e-ab14cdc4488f x-openstack-request-id: req-50420de1-cb9b-4c7c-a31e-ab14cdc4488f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.676 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a", "name": "test_0", "status": "ACTIVE", "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "user_id": "91c12bcb1ad14b95b1bdedf7527f1adf", "metadata": {}, "hostId": "037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059", "image": {"id": "5b0e8045-c81c-486a-86d2-bf0e0fd17a5a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"}]}, "flavor": {"id": "8aba0aff-301c-4123-b0dc-aba3acd2a3ad", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8aba0aff-301c-4123-b0dc-aba3acd2a3ad"}]}, "created": "2025-12-02T16:50:07Z", "updated": "2025-12-02T16:50:19Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.223", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a3:87:16"}, {"version": 4, "addr": "192.168.122.218", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a3:87:16"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-02T16:50:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.676 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a used request id req-50420de1-cb9b-4c7c-a31e-ab14cdc4488f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.678 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.679 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T16:51:04.679006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.685 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a / tap88cefba1-ab inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.686 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T16:51:04.687561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T16:51:04.688314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ovn_controller[97975]: 2025-12-02T16:51:04Z|00034|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.716 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.717 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.717 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.718 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T16:51:04.719147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.762 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 32440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.763 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.763 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.763 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.763 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.764 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T16:51:04.764110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.845 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.846 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.847 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.848 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.849 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.849 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.850 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.852 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T16:51:04.849235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.853 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T16:51:04.853194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.853 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.854 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.855 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.856 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.857 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.858 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T16:51:04.857733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.859 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.859 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.862 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.862 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.863 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.863 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T16:51:04.861952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T16:51:04.865849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.866 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.866 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.867 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.868 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.868 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.869 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1950288288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T16:51:04.868888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.869 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.870 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.871 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.871 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.872 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.872 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T16:51:04.871871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.873 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.873 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.874 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.874 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.875 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.877 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T16:51:04.873851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.877 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T16:51:04.877642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.879 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.879 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.880 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T16:51:04.879683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.880 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.882 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.882 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.882 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.882 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T16:51:04.882612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.883 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.884 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.884 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.884 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.884 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.885 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T16:51:04.884508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.886 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.887 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.888 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.888 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T16:51:04.886805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.888 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.888 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T16:51:04.888432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.889 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T16:51:04.889725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.891 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.891 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 1667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T16:51:04.891016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.892 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.892 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.892 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T16:51:04.892314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.892 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T16:51:04.893662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.893 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 49.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T16:51:04.894915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.895 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.895 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.895 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.896 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T16:51:04.896090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.896 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.897 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T16:51:04.897347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:51:04.900 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:51:07 compute-0 nova_compute[189459]: 2025-12-02 16:51:07.450 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:08 compute-0 nova_compute[189459]: 2025-12-02 16:51:08.258 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:08 compute-0 nova_compute[189459]: 2025-12-02 16:51:08.445 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:08 compute-0 nova_compute[189459]: 2025-12-02 16:51:08.446 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:08 compute-0 nova_compute[189459]: 2025-12-02 16:51:08.446 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:09 compute-0 podman[240407]: 2025-12-02 16:51:09.331630488 +0000 UTC m=+0.146632365 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Dec  2 16:51:09 compute-0 nova_compute[189459]: 2025-12-02 16:51:09.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:10 compute-0 nova_compute[189459]: 2025-12-02 16:51:10.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:10 compute-0 nova_compute[189459]: 2025-12-02 16:51:10.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:51:10 compute-0 nova_compute[189459]: 2025-12-02 16:51:10.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:51:11 compute-0 nova_compute[189459]: 2025-12-02 16:51:11.500 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:51:11 compute-0 nova_compute[189459]: 2025-12-02 16:51:11.501 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:51:11 compute-0 nova_compute[189459]: 2025-12-02 16:51:11.503 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:51:11 compute-0 nova_compute[189459]: 2025-12-02 16:51:11.503 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:51:12 compute-0 nova_compute[189459]: 2025-12-02 16:51:12.453 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:12 compute-0 nova_compute[189459]: 2025-12-02 16:51:12.968 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.000 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.001 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.002 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.003 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.003 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.004 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.005 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.033 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.034 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.034 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.035 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.132 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.195 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.198 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.259 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.260 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.279 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.354 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.355 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.434 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.889 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.892 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5250MB free_disk=72.20233917236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.893 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.895 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.961 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.961 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:51:13 compute-0 nova_compute[189459]: 2025-12-02 16:51:13.962 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:51:14 compute-0 nova_compute[189459]: 2025-12-02 16:51:14.015 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 16:51:14 compute-0 nova_compute[189459]: 2025-12-02 16:51:14.067 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updated inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  2 16:51:14 compute-0 nova_compute[189459]: 2025-12-02 16:51:14.067 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  2 16:51:14 compute-0 nova_compute[189459]: 2025-12-02 16:51:14.068 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 16:51:14 compute-0 nova_compute[189459]: 2025-12-02 16:51:14.089 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:51:14 compute-0 nova_compute[189459]: 2025-12-02 16:51:14.089 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:16 compute-0 podman[240442]: 2025-12-02 16:51:16.384438521 +0000 UTC m=+0.086443028 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:51:16 compute-0 podman[240441]: 2025-12-02 16:51:16.399256167 +0000 UTC m=+0.099455806 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 16:51:17 compute-0 nova_compute[189459]: 2025-12-02 16:51:17.456 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:18 compute-0 podman[240479]: 2025-12-02 16:51:18.239485252 +0000 UTC m=+0.076195295 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Dec  2 16:51:18 compute-0 nova_compute[189459]: 2025-12-02 16:51:18.282 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:19.005 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:51:19 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:19.007 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 16:51:19 compute-0 nova_compute[189459]: 2025-12-02 16:51:19.011 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:20 compute-0 podman[240500]: 2025-12-02 16:51:20.245913863 +0000 UTC m=+0.075596049 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, release=1214.1726694543, config_id=edpm, distribution-scope=public, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 16:51:20 compute-0 podman[240501]: 2025-12-02 16:51:20.263893923 +0000 UTC m=+0.078783614 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 16:51:22 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:22.010 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:22 compute-0 nova_compute[189459]: 2025-12-02 16:51:22.459 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:23 compute-0 nova_compute[189459]: 2025-12-02 16:51:23.286 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:23 compute-0 nova_compute[189459]: 2025-12-02 16:51:23.924 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:23 compute-0 nova_compute[189459]: 2025-12-02 16:51:23.924 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:23 compute-0 nova_compute[189459]: 2025-12-02 16:51:23.938 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.007 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.008 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.019 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.020 189463 INFO nova.compute.claims [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.149 189463 DEBUG nova.compute.provider_tree [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.163 189463 DEBUG nova.scheduler.client.report [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.193 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.194 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.246 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.247 189463 DEBUG nova.network.neutron [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.272 189463 INFO nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.301 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.376 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.377 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.378 189463 INFO nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Creating image(s)#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.378 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.379 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.380 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.394 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.494 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.495 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.497 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.512 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.588 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.590 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.644 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.646 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.647 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.712 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.713 189463 DEBUG nova.virt.disk.api [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Checking if we can resize image /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.714 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.773 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.774 189463 DEBUG nova.virt.disk.api [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Cannot resize image /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.775 189463 DEBUG nova.objects.instance [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'migration_context' on Instance uuid 839e5006-8465-4d21-8287-0bba4f28a358 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.789 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.790 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.790 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.808 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.868 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.869 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.870 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.886 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.943 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:24 compute-0 nova_compute[189459]: 2025-12-02 16:51:24.945 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.000 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.002 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.003 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.077 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.079 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.080 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Ensure instance console log exists: /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.081 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.081 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.082 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.543 189463 DEBUG nova.network.neutron [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Successfully updated port: 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.560 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.560 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.560 189463 DEBUG nova.network.neutron [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.625 189463 DEBUG nova.compute.manager [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-changed-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.626 189463 DEBUG nova.compute.manager [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Refreshing instance network info cache due to event network-changed-14dc4429-05ef-4ac6-9fa4-500c0ce93c01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.626 189463 DEBUG oslo_concurrency.lockutils [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:51:25 compute-0 nova_compute[189459]: 2025-12-02 16:51:25.714 189463 DEBUG nova.network.neutron [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.736 189463 DEBUG nova.network.neutron [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.764 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.764 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Instance network_info: |[{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.765 189463 DEBUG oslo_concurrency.lockutils [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.765 189463 DEBUG nova.network.neutron [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Refreshing network info cache for port 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.771 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Start _get_guest_xml network_info=[{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}], 'ephemerals': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 1, 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.784 189463 WARNING nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.798 189463 DEBUG nova.virt.libvirt.host [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.800 189463 DEBUG nova.virt.libvirt.host [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.810 189463 DEBUG nova.virt.libvirt.host [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.811 189463 DEBUG nova.virt.libvirt.host [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.812 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.813 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T16:48:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8aba0aff-301c-4123-b0dc-aba3acd2a3ad',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.814 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.815 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.817 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.817 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.818 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.819 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.820 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.821 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.821 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.823 189463 DEBUG nova.virt.hardware [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.831 189463 DEBUG nova.virt.libvirt.vif [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:51:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d',id=2,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-auysmsw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:51:24Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Nzk3MzExMzA4MzIzMTYzOTMwMj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  2 16:51:26 compute-0 nova_compute[189459]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Nzk3MzExMzA4MzIzMTYzOTMwMj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=839e5006-8465-4d21-8287-0bba4f28a358,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.832 189463 DEBUG nova.network.os_vif_util [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.834 189463 DEBUG nova.network.os_vif_util [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.835 189463 DEBUG nova.objects.instance [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 839e5006-8465-4d21-8287-0bba4f28a358 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.852 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] End _get_guest_xml xml=<domain type="kvm">
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <uuid>839e5006-8465-4d21-8287-0bba4f28a358</uuid>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <name>instance-00000002</name>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <memory>524288</memory>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <metadata>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:name>vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d</nova:name>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 16:51:26</nova:creationTime>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:flavor name="m1.small">
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:memory>512</nova:memory>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:ephemeral>1</nova:ephemeral>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:user uuid="91c12bcb1ad14b95b1bdedf7527f1adf">admin</nova:user>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:project uuid="2f96d47197fa40f2a7126bf626847d74">admin</nova:project>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        <nova:port uuid="14dc4429-05ef-4ac6-9fa4-500c0ce93c01">
Dec  2 16:51:26 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="192.168.0.6" ipVersion="4"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </metadata>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <system>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <entry name="serial">839e5006-8465-4d21-8287-0bba4f28a358</entry>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <entry name="uuid">839e5006-8465-4d21-8287-0bba4f28a358</entry>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </system>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <os>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </os>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <features>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <apic/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </features>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </clock>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </cpu>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  <devices>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <target dev="vdb" bus="virtio"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.config"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:de:39:f2"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <target dev="tap14dc4429-05"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </interface>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/console.log" append="off"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </serial>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <video>
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </video>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </rng>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 16:51:26 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 16:51:26 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 16:51:26 compute-0 nova_compute[189459]:  </devices>
Dec  2 16:51:26 compute-0 nova_compute[189459]: </domain>
Dec  2 16:51:26 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.853 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Preparing to wait for external event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.853 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.854 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.854 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.855 189463 DEBUG nova.virt.libvirt.vif [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:51:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d',id=2,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-auysmsw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:51:24Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Nzk3MzExMzA4MzIzMTYzOTMwMj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  2 16:51:26 compute-0 nova_compute[189459]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Nzk3MzExMzA4MzIzMTYzOTMwMj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=839e5006-8465-4d21-8287-0bba4f28a358,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.856 189463 DEBUG nova.network.os_vif_util [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.857 189463 DEBUG nova.network.os_vif_util [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.857 189463 DEBUG os_vif [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.858 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.859 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.860 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.865 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.865 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14dc4429-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.866 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14dc4429-05, col_values=(('external_ids', {'iface-id': '14dc4429-05ef-4ac6-9fa4-500c0ce93c01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:39:f2', 'vm-uuid': '839e5006-8465-4d21-8287-0bba4f28a358'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.868 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:26 compute-0 NetworkManager[56503]: <info>  [1764694286.8702] manager: (tap14dc4429-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.872 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.878 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.880 189463 INFO os_vif [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05')#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.944 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.944 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.944 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.945 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No VIF found with MAC fa:16:3e:de:39:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 16:51:26 compute-0 nova_compute[189459]: 2025-12-02 16:51:26.945 189463 INFO nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Using config drive#033[00m
Dec  2 16:51:27 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 16:51:26.831 189463 DEBUG nova.virt.libvirt.vif [None req-fc678c80-5b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 16:51:27 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 16:51:26.855 189463 DEBUG nova.virt.libvirt.vif [None req-fc678c80-5b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 16:51:27 compute-0 podman[240570]: 2025-12-02 16:51:27.233629868 +0000 UTC m=+0.055675747 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:51:27 compute-0 podman[240569]: 2025-12-02 16:51:27.263427524 +0000 UTC m=+0.087943099 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:51:27 compute-0 podman[240568]: 2025-12-02 16:51:27.296586349 +0000 UTC m=+0.118659749 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.461 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.524 189463 INFO nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Creating config drive at /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.config#033[00m
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.530 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9hs2z07 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.655 189463 DEBUG oslo_concurrency.processutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9hs2z07" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:51:27 compute-0 kernel: tap14dc4429-05: entered promiscuous mode
Dec  2 16:51:27 compute-0 NetworkManager[56503]: <info>  [1764694287.7365] manager: (tap14dc4429-05): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec  2 16:51:27 compute-0 ovn_controller[97975]: 2025-12-02T16:51:27Z|00035|binding|INFO|Claiming lport 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 for this chassis.
Dec  2 16:51:27 compute-0 ovn_controller[97975]: 2025-12-02T16:51:27Z|00036|binding|INFO|14dc4429-05ef-4ac6-9fa4-500c0ce93c01: Claiming fa:16:3e:de:39:f2 192.168.0.6
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.740 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.750 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:39:f2 192.168.0.6'], port_security=['fa:16:3e:de:39:f2 192.168.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-lawun5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-port-nmqrzuzw3ryx', 'neutron:cidrs': '192.168.0.6/24', 'neutron:device_id': '839e5006-8465-4d21-8287-0bba4f28a358', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-lawun5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-port-nmqrzuzw3ryx', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=14dc4429-05ef-4ac6-9fa4-500c0ce93c01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.753 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d bound to our chassis#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.755 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 16:51:27 compute-0 ovn_controller[97975]: 2025-12-02T16:51:27Z|00037|binding|INFO|Setting lport 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 ovn-installed in OVS
Dec  2 16:51:27 compute-0 ovn_controller[97975]: 2025-12-02T16:51:27Z|00038|binding|INFO|Setting lport 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 up in Southbound
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.760 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:27 compute-0 systemd-udevd[240655]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.774 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[20e97793-5996-4d61-99a2-6f9e0b11a91d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:51:27 compute-0 systemd-machined[155878]: New machine qemu-2-instance-00000002.
Dec  2 16:51:27 compute-0 NetworkManager[56503]: <info>  [1764694287.7945] device (tap14dc4429-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 16:51:27 compute-0 NetworkManager[56503]: <info>  [1764694287.7956] device (tap14dc4429-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 16:51:27 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.822 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d56759-5a9f-4337-8dd5-8b64f8d02b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.826 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf2526a-a741-4782-9401-88d65661fa16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.866 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[b24808c4-4f63-49f6-ba4f-a44a6b4b65db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.885 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[19217d5f-d90b-44fe-b6bb-bdd7bccd9294]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 40853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240668, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.901 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[08e9ac7a-a6f5-4fe9-b9a3-654271f9d8a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377200, 'tstamp': 377200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240670, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377202, 'tstamp': 377202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240670, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.903 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.904 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:27 compute-0 nova_compute[189459]: 2025-12-02 16:51:27.906 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.906 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.907 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.907 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:51:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:51:27.907 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.014 189463 DEBUG nova.compute.manager [req-52c09adb-69bd-4f7b-b28e-a12d5b5b9d2b req-dc17dafd-1b0a-4002-90f0-e47792f9e12b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.016 189463 DEBUG oslo_concurrency.lockutils [req-52c09adb-69bd-4f7b-b28e-a12d5b5b9d2b req-dc17dafd-1b0a-4002-90f0-e47792f9e12b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.017 189463 DEBUG oslo_concurrency.lockutils [req-52c09adb-69bd-4f7b-b28e-a12d5b5b9d2b req-dc17dafd-1b0a-4002-90f0-e47792f9e12b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.017 189463 DEBUG oslo_concurrency.lockutils [req-52c09adb-69bd-4f7b-b28e-a12d5b5b9d2b req-dc17dafd-1b0a-4002-90f0-e47792f9e12b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.019 189463 DEBUG nova.compute.manager [req-52c09adb-69bd-4f7b-b28e-a12d5b5b9d2b req-dc17dafd-1b0a-4002-90f0-e47792f9e12b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Processing event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.098 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.099 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694288.0994637, 839e5006-8465-4d21-8287-0bba4f28a358 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.100 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] VM Started (Lifecycle Event)#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.103 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.108 189463 INFO nova.virt.libvirt.driver [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Instance spawned successfully.#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.108 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.127 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.137 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.146 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.146 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.147 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.147 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.148 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.148 189463 DEBUG nova.virt.libvirt.driver [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.173 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.174 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694288.0995796, 839e5006-8465-4d21-8287-0bba4f28a358 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.174 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] VM Paused (Lifecycle Event)#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.211 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.218 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694288.1025007, 839e5006-8465-4d21-8287-0bba4f28a358 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.218 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] VM Resumed (Lifecycle Event)#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.224 189463 INFO nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Took 3.85 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.225 189463 DEBUG nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.236 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.241 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.262 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.289 189463 INFO nova.compute.manager [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Took 4.31 seconds to build instance.#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.313 189463 DEBUG oslo_concurrency.lockutils [None req-fc678c80-5b53-42cb-bddd-7efb68b2dddc 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.549 189463 DEBUG nova.network.neutron [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updated VIF entry in instance network info cache for port 14dc4429-05ef-4ac6-9fa4-500c0ce93c01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.550 189463 DEBUG nova.network.neutron [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:51:28 compute-0 nova_compute[189459]: 2025-12-02 16:51:28.567 189463 DEBUG oslo_concurrency.lockutils [req-d88f2f91-c3ca-4576-9977-3f1248d4f902 req-3c076e42-6e38-46e0-b553-af3a68388763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:51:29 compute-0 podman[203941]: time="2025-12-02T16:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:51:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:51:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4765 "" "Go-http-client/1.1"
Dec  2 16:51:30 compute-0 nova_compute[189459]: 2025-12-02 16:51:30.208 189463 DEBUG nova.compute.manager [req-3b83c129-29d0-479e-b0b1-60fce8e4efe5 req-5283a82d-bfdf-4207-8f80-455e61fdd1cd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:51:30 compute-0 nova_compute[189459]: 2025-12-02 16:51:30.209 189463 DEBUG oslo_concurrency.lockutils [req-3b83c129-29d0-479e-b0b1-60fce8e4efe5 req-5283a82d-bfdf-4207-8f80-455e61fdd1cd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:51:30 compute-0 nova_compute[189459]: 2025-12-02 16:51:30.209 189463 DEBUG oslo_concurrency.lockutils [req-3b83c129-29d0-479e-b0b1-60fce8e4efe5 req-5283a82d-bfdf-4207-8f80-455e61fdd1cd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:51:30 compute-0 nova_compute[189459]: 2025-12-02 16:51:30.209 189463 DEBUG oslo_concurrency.lockutils [req-3b83c129-29d0-479e-b0b1-60fce8e4efe5 req-5283a82d-bfdf-4207-8f80-455e61fdd1cd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:51:30 compute-0 nova_compute[189459]: 2025-12-02 16:51:30.209 189463 DEBUG nova.compute.manager [req-3b83c129-29d0-479e-b0b1-60fce8e4efe5 req-5283a82d-bfdf-4207-8f80-455e61fdd1cd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] No waiting events found dispatching network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 16:51:30 compute-0 nova_compute[189459]: 2025-12-02 16:51:30.209 189463 WARNING nova.compute.manager [req-3b83c129-29d0-479e-b0b1-60fce8e4efe5 req-5283a82d-bfdf-4207-8f80-455e61fdd1cd b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received unexpected event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 for instance with vm_state active and task_state None.#033[00m
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: ERROR   16:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: ERROR   16:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: ERROR   16:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:51:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:51:31 compute-0 nova_compute[189459]: 2025-12-02 16:51:31.869 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:32 compute-0 nova_compute[189459]: 2025-12-02 16:51:32.466 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:36 compute-0 nova_compute[189459]: 2025-12-02 16:51:36.873 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:37 compute-0 nova_compute[189459]: 2025-12-02 16:51:37.469 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:40 compute-0 podman[240679]: 2025-12-02 16:51:40.304685078 +0000 UTC m=+0.115628148 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64)
Dec  2 16:51:41 compute-0 nova_compute[189459]: 2025-12-02 16:51:41.877 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:42 compute-0 nova_compute[189459]: 2025-12-02 16:51:42.470 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:46 compute-0 nova_compute[189459]: 2025-12-02 16:51:46.880 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:47 compute-0 podman[240700]: 2025-12-02 16:51:47.255432888 +0000 UTC m=+0.084114986 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  2 16:51:47 compute-0 podman[240701]: 2025-12-02 16:51:47.267127621 +0000 UTC m=+0.089267854 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:51:47 compute-0 nova_compute[189459]: 2025-12-02 16:51:47.474 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:49 compute-0 podman[240738]: 2025-12-02 16:51:49.270769507 +0000 UTC m=+0.087923128 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:51:51 compute-0 podman[240760]: 2025-12-02 16:51:51.261726545 +0000 UTC m=+0.082962556 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 16:51:51 compute-0 podman[240759]: 2025-12-02 16:51:51.276258633 +0000 UTC m=+0.101539792 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, maintainer=Red Hat, Inc., container_name=kepler)
Dec  2 16:51:51 compute-0 nova_compute[189459]: 2025-12-02 16:51:51.884 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:52 compute-0 nova_compute[189459]: 2025-12-02 16:51:52.477 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:56 compute-0 nova_compute[189459]: 2025-12-02 16:51:56.887 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:57 compute-0 nova_compute[189459]: 2025-12-02 16:51:57.480 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:51:57 compute-0 ovn_controller[97975]: 2025-12-02T16:51:57Z|00039|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec  2 16:51:58 compute-0 podman[240799]: 2025-12-02 16:51:58.253041277 +0000 UTC m=+0.078855216 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 16:51:58 compute-0 podman[240800]: 2025-12-02 16:51:58.275709612 +0000 UTC m=+0.099050415 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:51:58 compute-0 podman[240798]: 2025-12-02 16:51:58.279011621 +0000 UTC m=+0.112620868 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:51:59 compute-0 podman[203941]: time="2025-12-02T16:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:51:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:51:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4755 "" "Go-http-client/1.1"
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: ERROR   16:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: ERROR   16:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: ERROR   16:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:52:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:52:01 compute-0 ovn_controller[97975]: 2025-12-02T16:52:01Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:39:f2 192.168.0.6
Dec  2 16:52:01 compute-0 ovn_controller[97975]: 2025-12-02T16:52:01Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:39:f2 192.168.0.6
Dec  2 16:52:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:52:01.858 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:52:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:52:01.859 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:52:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:52:01.859 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:52:01 compute-0 nova_compute[189459]: 2025-12-02 16:52:01.890 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:02 compute-0 nova_compute[189459]: 2025-12-02 16:52:02.483 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:06 compute-0 nova_compute[189459]: 2025-12-02 16:52:06.894 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:07 compute-0 nova_compute[189459]: 2025-12-02 16:52:07.485 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:09 compute-0 nova_compute[189459]: 2025-12-02 16:52:09.497 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:09 compute-0 nova_compute[189459]: 2025-12-02 16:52:09.497 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:10 compute-0 nova_compute[189459]: 2025-12-02 16:52:10.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:10 compute-0 nova_compute[189459]: 2025-12-02 16:52:10.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:52:10 compute-0 nova_compute[189459]: 2025-12-02 16:52:10.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:52:11 compute-0 podman[240874]: 2025-12-02 16:52:11.266095237 +0000 UTC m=+0.088455732 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  2 16:52:11 compute-0 nova_compute[189459]: 2025-12-02 16:52:11.531 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:52:11 compute-0 nova_compute[189459]: 2025-12-02 16:52:11.531 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:52:11 compute-0 nova_compute[189459]: 2025-12-02 16:52:11.532 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:52:11 compute-0 nova_compute[189459]: 2025-12-02 16:52:11.533 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:52:11 compute-0 nova_compute[189459]: 2025-12-02 16:52:11.897 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:12 compute-0 nova_compute[189459]: 2025-12-02 16:52:12.486 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.930 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.948 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.948 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.948 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.949 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.949 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.949 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.949 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.949 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.949 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.975 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.975 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.976 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:52:14 compute-0 nova_compute[189459]: 2025-12-02 16:52:14.976 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.065 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.140 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.142 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.219 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.222 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.284 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.285 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.351 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.360 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.422 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.423 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.481 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.482 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.545 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.547 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.641 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.994 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.996 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.17975616455078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.996 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 16:52:15 compute-0 nova_compute[189459]: 2025-12-02 16:52:15.997 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.136 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.136 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.137 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.138 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.208 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.223 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.258 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.259 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 16:52:16 compute-0 nova_compute[189459]: 2025-12-02 16:52:16.901 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:17 compute-0 nova_compute[189459]: 2025-12-02 16:52:17.490 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:18 compute-0 nova_compute[189459]: 2025-12-02 16:52:18.255 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:52:18 compute-0 podman[240921]: 2025-12-02 16:52:18.258700012 +0000 UTC m=+0.082054492 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  2 16:52:18 compute-0 podman[240920]: 2025-12-02 16:52:18.275287274 +0000 UTC m=+0.101132510 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 16:52:20 compute-0 podman[240963]: 2025-12-02 16:52:20.248122109 +0000 UTC m=+0.081325656 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 16:52:21 compute-0 nova_compute[189459]: 2025-12-02 16:52:21.904 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:22 compute-0 podman[240984]: 2025-12-02 16:52:22.243492331 +0000 UTC m=+0.058777786 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  2 16:52:22 compute-0 podman[240983]: 2025-12-02 16:52:22.294458458 +0000 UTC m=+0.110348619 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 16:52:22 compute-0 nova_compute[189459]: 2025-12-02 16:52:22.492 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:26 compute-0 nova_compute[189459]: 2025-12-02 16:52:26.907 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:27 compute-0 nova_compute[189459]: 2025-12-02 16:52:27.495 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:29 compute-0 podman[241022]: 2025-12-02 16:52:29.267890177 +0000 UTC m=+0.078448000 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:52:29 compute-0 podman[241023]: 2025-12-02 16:52:29.340718986 +0000 UTC m=+0.131281877 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:52:29 compute-0 podman[241021]: 2025-12-02 16:52:29.347272771 +0000 UTC m=+0.153490048 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  2 16:52:29 compute-0 podman[203941]: time="2025-12-02T16:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:52:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:52:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: ERROR   16:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: ERROR   16:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: ERROR   16:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:52:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:52:31 compute-0 nova_compute[189459]: 2025-12-02 16:52:31.910 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:32 compute-0 nova_compute[189459]: 2025-12-02 16:52:32.499 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:36 compute-0 nova_compute[189459]: 2025-12-02 16:52:36.915 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:37 compute-0 nova_compute[189459]: 2025-12-02 16:52:37.501 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:41 compute-0 nova_compute[189459]: 2025-12-02 16:52:41.918 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:52:42 compute-0 podman[241098]: 2025-12-02 16:52:42.243779453 +0000 UTC m=+0.080037013 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 16:52:42 compute-0 nova_compute[189459]: 2025-12-02 16:52:42.504 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:46 compute-0 nova_compute[189459]: 2025-12-02 16:52:46.921 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:47 compute-0 nova_compute[189459]: 2025-12-02 16:52:47.506 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:49 compute-0 podman[241120]: 2025-12-02 16:52:49.261176724 +0000 UTC m=+0.080604307 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:52:49 compute-0 podman[241119]: 2025-12-02 16:52:49.302435113 +0000 UTC m=+0.123944012 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  2 16:52:51 compute-0 podman[241157]: 2025-12-02 16:52:51.256700881 +0000 UTC m=+0.087845480 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 16:52:51 compute-0 nova_compute[189459]: 2025-12-02 16:52:51.925 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:52 compute-0 nova_compute[189459]: 2025-12-02 16:52:52.510 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:53 compute-0 podman[241177]: 2025-12-02 16:52:53.298151659 +0000 UTC m=+0.118154807 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 16:52:53 compute-0 podman[241178]: 2025-12-02 16:52:53.317457833 +0000 UTC m=+0.142526486 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 16:52:56 compute-0 nova_compute[189459]: 2025-12-02 16:52:56.931 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:57 compute-0 nova_compute[189459]: 2025-12-02 16:52:57.514 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:52:59 compute-0 podman[203941]: time="2025-12-02T16:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:52:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:52:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4763 "" "Go-http-client/1.1"
Dec  2 16:53:00 compute-0 podman[241214]: 2025-12-02 16:53:00.275845488 +0000 UTC m=+0.086433082 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 16:53:00 compute-0 podman[241213]: 2025-12-02 16:53:00.287030496 +0000 UTC m=+0.099013917 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 16:53:00 compute-0 podman[241212]: 2025-12-02 16:53:00.322122681 +0000 UTC m=+0.137120353 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: ERROR   16:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: ERROR   16:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: ERROR   16:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:53:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:53:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:53:01.859 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:53:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:53:01.860 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:53:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:53:01.860 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:53:01 compute-0 nova_compute[189459]: 2025-12-02 16:53:01.934 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:02 compute-0 nova_compute[189459]: 2025-12-02 16:53:02.519 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.047 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.048 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.055 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.059 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 839e5006-8465-4d21-8287-0bba4f28a358 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.060 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/839e5006-8465-4d21-8287-0bba4f28a358 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.703 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1958 Content-Type: application/json Date: Tue, 02 Dec 2025 16:53:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ede49d38-fb3a-43b0-ad92-45fe581ddf60 x-openstack-request-id: req-ede49d38-fb3a-43b0-ad92-45fe581ddf60 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.703 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "839e5006-8465-4d21-8287-0bba4f28a358", "name": "vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d", "status": "ACTIVE", "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "user_id": "91c12bcb1ad14b95b1bdedf7527f1adf", "metadata": {"metering.server_group": "a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea"}, "hostId": "037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059", "image": {"id": "5b0e8045-c81c-486a-86d2-bf0e0fd17a5a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"}]}, "flavor": {"id": "8aba0aff-301c-4123-b0dc-aba3acd2a3ad", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8aba0aff-301c-4123-b0dc-aba3acd2a3ad"}]}, "created": "2025-12-02T16:51:22Z", "updated": "2025-12-02T16:51:28Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:39:f2"}, {"version": 4, "addr": "192.168.122.222", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:39:f2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/839e5006-8465-4d21-8287-0bba4f28a358"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/839e5006-8465-4d21-8287-0bba4f28a358"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-02T16:51:28.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.703 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/839e5006-8465-4d21-8287-0bba4f28a358 used request id req-ede49d38-fb3a-43b0-ad92-45fe581ddf60 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.704 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '839e5006-8465-4d21-8287-0bba4f28a358', 'name': 'vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.705 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T16:53:03.705109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.712 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.718 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 839e5006-8465-4d21-8287-0bba4f28a358 / tap14dc4429-05 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.719 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.720 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.721 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.721 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T16:53:03.721120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.722 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T16:53:03.724563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.765 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.765 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.766 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.797 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.797 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.798 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.799 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.800 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T16:53:03.800045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.834 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 34040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.863 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/cpu volume: 48060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T16:53:03.865163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.955 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.955 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:03.956 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.061 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.062 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.062 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.064 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.064 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.065 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.065 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T16:53:04.064877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.066 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.066 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 418871740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.067 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 75002437 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.067 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 69536833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.069 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T16:53:04.069406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.069 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.070 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.070 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.071 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.071 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.072 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.074 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.074 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.074 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.075 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.075 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.076 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T16:53:04.073794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.077 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.078 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.078 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T16:53:04.078317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.079 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.079 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.080 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.080 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.080 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.082 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.083 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.083 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.084 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.084 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.084 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T16:53:04.082744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.085 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.086 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.086 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.086 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.087 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 1347051115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.087 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 9551865 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.087 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.088 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.089 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.089 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T16:53:04.086088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T16:53:04.088965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.090 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.091 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.091 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.091 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.092 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.092 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T16:53:04.090772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.093 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.094 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.094 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T16:53:04.093741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.095 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.095 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d>]
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T16:53:04.095472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.097 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.097 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.098 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.098 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T16:53:04.096731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T16:53:04.097960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.100 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T16:53:04.099651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.101 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T16:53:04.101109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.101 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.102 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T16:53:04.102779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.103 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T16:53:04.104459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.105 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.105 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.106 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.106 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T16:53:04.106023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.107 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.108 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.108 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.109 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.109 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.109 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d>]
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T16:53:04.107767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T16:53:04.109287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.110 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.111 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T16:53:04.110651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.112 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.112 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:53:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:53:04.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T16:53:04.112413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:53:06 compute-0 nova_compute[189459]: 2025-12-02 16:53:06.938 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:07 compute-0 nova_compute[189459]: 2025-12-02 16:53:07.522 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:08 compute-0 nova_compute[189459]: 2025-12-02 16:53:08.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:10 compute-0 nova_compute[189459]: 2025-12-02 16:53:10.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:10 compute-0 nova_compute[189459]: 2025-12-02 16:53:10.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:53:11 compute-0 nova_compute[189459]: 2025-12-02 16:53:11.557 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:53:11 compute-0 nova_compute[189459]: 2025-12-02 16:53:11.558 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:53:11 compute-0 nova_compute[189459]: 2025-12-02 16:53:11.558 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:53:11 compute-0 nova_compute[189459]: 2025-12-02 16:53:11.942 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:12 compute-0 nova_compute[189459]: 2025-12-02 16:53:12.525 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:13 compute-0 podman[241285]: 2025-12-02 16:53:13.277186542 +0000 UTC m=+0.109480525 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=)
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.324 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.340 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.340 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.341 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.341 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.342 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.342 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.342 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.343 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.343 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.369 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.371 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.371 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.372 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.439 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.510 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.512 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.596 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.599 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.671 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.673 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.775 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.787 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.862 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.864 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.958 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:15 compute-0 nova_compute[189459]: 2025-12-02 16:53:15.960 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.052 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.054 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.125 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.488 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.490 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5025MB free_disk=72.17973709106445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.490 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.491 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.593 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.594 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.595 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.595 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.661 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.793 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.796 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.796 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:53:16 compute-0 nova_compute[189459]: 2025-12-02 16:53:16.946 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:17 compute-0 nova_compute[189459]: 2025-12-02 16:53:17.527 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:17 compute-0 nova_compute[189459]: 2025-12-02 16:53:17.794 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:53:20 compute-0 podman[241330]: 2025-12-02 16:53:20.241804155 +0000 UTC m=+0.077224178 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 16:53:20 compute-0 podman[241331]: 2025-12-02 16:53:20.250573818 +0000 UTC m=+0.078827470 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:53:21 compute-0 nova_compute[189459]: 2025-12-02 16:53:21.950 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:22 compute-0 podman[241371]: 2025-12-02 16:53:22.301813247 +0000 UTC m=+0.119024240 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  2 16:53:22 compute-0 nova_compute[189459]: 2025-12-02 16:53:22.529 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:24 compute-0 podman[241392]: 2025-12-02 16:53:24.304974736 +0000 UTC m=+0.109039484 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 16:53:24 compute-0 podman[241391]: 2025-12-02 16:53:24.328808221 +0000 UTC m=+0.143251666 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, container_name=kepler, version=9.4, release-0.7.12=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Dec  2 16:53:26 compute-0 nova_compute[189459]: 2025-12-02 16:53:26.955 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:27 compute-0 nova_compute[189459]: 2025-12-02 16:53:27.532 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:29 compute-0 podman[203941]: time="2025-12-02T16:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:53:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:53:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4761 "" "Go-http-client/1.1"
Dec  2 16:53:31 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 16:53:31 compute-0 podman[241433]: 2025-12-02 16:53:31.200265691 +0000 UTC m=+0.105569262 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 16:53:31 compute-0 podman[241439]: 2025-12-02 16:53:31.201661588 +0000 UTC m=+0.104829402 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:53:31 compute-0 podman[241431]: 2025-12-02 16:53:31.238769586 +0000 UTC m=+0.157545045 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: ERROR   16:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: ERROR   16:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: ERROR   16:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:53:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:53:31 compute-0 nova_compute[189459]: 2025-12-02 16:53:31.959 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:32 compute-0 nova_compute[189459]: 2025-12-02 16:53:32.535 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:36 compute-0 nova_compute[189459]: 2025-12-02 16:53:36.964 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:37 compute-0 nova_compute[189459]: 2025-12-02 16:53:37.539 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:41 compute-0 nova_compute[189459]: 2025-12-02 16:53:41.968 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:42 compute-0 nova_compute[189459]: 2025-12-02 16:53:42.541 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:44 compute-0 podman[241504]: 2025-12-02 16:53:44.272342719 +0000 UTC m=+0.100254110 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 16:53:46 compute-0 nova_compute[189459]: 2025-12-02 16:53:46.972 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:47 compute-0 nova_compute[189459]: 2025-12-02 16:53:47.544 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:51 compute-0 podman[241526]: 2025-12-02 16:53:51.291816971 +0000 UTC m=+0.094418235 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 16:53:51 compute-0 podman[241525]: 2025-12-02 16:53:51.29853975 +0000 UTC m=+0.107571035 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  2 16:53:51 compute-0 nova_compute[189459]: 2025-12-02 16:53:51.976 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:52 compute-0 nova_compute[189459]: 2025-12-02 16:53:52.546 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:53 compute-0 podman[241561]: 2025-12-02 16:53:53.280444083 +0000 UTC m=+0.109602159 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:53:55 compute-0 podman[241579]: 2025-12-02 16:53:55.249182366 +0000 UTC m=+0.076380875 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 16:53:55 compute-0 podman[241578]: 2025-12-02 16:53:55.279053332 +0000 UTC m=+0.108754847 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Dec  2 16:53:56 compute-0 nova_compute[189459]: 2025-12-02 16:53:56.980 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:57 compute-0 nova_compute[189459]: 2025-12-02 16:53:57.548 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:53:59 compute-0 podman[203941]: time="2025-12-02T16:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:53:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:53:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4767 "" "Go-http-client/1.1"
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: ERROR   16:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: ERROR   16:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: ERROR   16:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: ERROR   16:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: ERROR   16:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:54:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:54:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:54:01.860 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:54:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:54:01.861 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:54:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:54:01.862 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:54:01 compute-0 nova_compute[189459]: 2025-12-02 16:54:01.983 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:02 compute-0 podman[241617]: 2025-12-02 16:54:02.25818163 +0000 UTC m=+0.073398665 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 16:54:02 compute-0 podman[241616]: 2025-12-02 16:54:02.294430376 +0000 UTC m=+0.100752854 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:54:02 compute-0 podman[241615]: 2025-12-02 16:54:02.348482765 +0000 UTC m=+0.172897455 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:54:02 compute-0 nova_compute[189459]: 2025-12-02 16:54:02.555 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:06 compute-0 nova_compute[189459]: 2025-12-02 16:54:06.988 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:07 compute-0 nova_compute[189459]: 2025-12-02 16:54:07.561 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:09 compute-0 nova_compute[189459]: 2025-12-02 16:54:09.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:11 compute-0 nova_compute[189459]: 2025-12-02 16:54:11.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:11 compute-0 nova_compute[189459]: 2025-12-02 16:54:11.992 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:12 compute-0 nova_compute[189459]: 2025-12-02 16:54:12.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:12 compute-0 nova_compute[189459]: 2025-12-02 16:54:12.436 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:12 compute-0 nova_compute[189459]: 2025-12-02 16:54:12.437 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:54:12 compute-0 nova_compute[189459]: 2025-12-02 16:54:12.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:54:12 compute-0 nova_compute[189459]: 2025-12-02 16:54:12.568 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:13 compute-0 nova_compute[189459]: 2025-12-02 16:54:13.627 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:54:13 compute-0 nova_compute[189459]: 2025-12-02 16:54:13.628 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:54:13 compute-0 nova_compute[189459]: 2025-12-02 16:54:13.629 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:54:13 compute-0 nova_compute[189459]: 2025-12-02 16:54:13.630 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:54:14 compute-0 podman[241686]: 2025-12-02 16:54:14.821939652 +0000 UTC m=+0.118946528 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.915 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.931 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.932 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.933 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.933 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.934 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.934 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.934 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.935 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.936 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.958 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.959 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.959 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:54:15 compute-0 nova_compute[189459]: 2025-12-02 16:54:15.960 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.041 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.101 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.102 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.161 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.162 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.218 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.220 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.291 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.298 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.368 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.369 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.453 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.454 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.535 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.536 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.596 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.994 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.996 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5045MB free_disk=72.17973709106445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.996 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.996 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:54:16 compute-0 nova_compute[189459]: 2025-12-02 16:54:16.998 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.076 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.076 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.077 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.077 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.147 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.161 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.163 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.163 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:54:17 compute-0 nova_compute[189459]: 2025-12-02 16:54:17.574 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:22 compute-0 nova_compute[189459]: 2025-12-02 16:54:22.002 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:22 compute-0 podman[241734]: 2025-12-02 16:54:22.260333249 +0000 UTC m=+0.073354524 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 16:54:22 compute-0 podman[241733]: 2025-12-02 16:54:22.297815547 +0000 UTC m=+0.111616733 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 16:54:22 compute-0 nova_compute[189459]: 2025-12-02 16:54:22.577 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:24 compute-0 podman[241766]: 2025-12-02 16:54:24.264231929 +0000 UTC m=+0.091116478 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 16:54:26 compute-0 podman[241787]: 2025-12-02 16:54:26.300271073 +0000 UTC m=+0.121069255 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, managed_by=edpm_ansible, 
release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Dec  2 16:54:26 compute-0 podman[241788]: 2025-12-02 16:54:26.306018066 +0000 UTC m=+0.129042657 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 16:54:27 compute-0 nova_compute[189459]: 2025-12-02 16:54:27.008 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:27 compute-0 nova_compute[189459]: 2025-12-02 16:54:27.582 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:29 compute-0 podman[203941]: time="2025-12-02T16:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:54:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:54:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4774 "" "Go-http-client/1.1"
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: ERROR   16:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: ERROR   16:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: ERROR   16:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: ERROR   16:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: ERROR   16:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:54:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:54:32 compute-0 nova_compute[189459]: 2025-12-02 16:54:32.013 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:32 compute-0 nova_compute[189459]: 2025-12-02 16:54:32.586 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:33 compute-0 podman[241825]: 2025-12-02 16:54:33.264605812 +0000 UTC m=+0.080407650 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 16:54:33 compute-0 podman[241831]: 2025-12-02 16:54:33.27881827 +0000 UTC m=+0.083108841 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:54:33 compute-0 podman[241824]: 2025-12-02 16:54:33.318032562 +0000 UTC m=+0.141331339 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 16:54:37 compute-0 nova_compute[189459]: 2025-12-02 16:54:37.016 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:37 compute-0 nova_compute[189459]: 2025-12-02 16:54:37.589 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:42 compute-0 nova_compute[189459]: 2025-12-02 16:54:42.020 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:42 compute-0 nova_compute[189459]: 2025-12-02 16:54:42.593 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:45 compute-0 podman[241893]: 2025-12-02 16:54:45.298264639 +0000 UTC m=+0.113418947 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.openshift.expose-services=, 
managed_by=edpm_ansible, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 16:54:47 compute-0 nova_compute[189459]: 2025-12-02 16:54:47.025 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:47 compute-0 nova_compute[189459]: 2025-12-02 16:54:47.597 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:52 compute-0 nova_compute[189459]: 2025-12-02 16:54:52.031 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:52 compute-0 nova_compute[189459]: 2025-12-02 16:54:52.600 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:53 compute-0 podman[241912]: 2025-12-02 16:54:53.315576114 +0000 UTC m=+0.128707323 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 16:54:53 compute-0 podman[241913]: 2025-12-02 16:54:53.322300473 +0000 UTC m=+0.130059379 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:54:55 compute-0 podman[241948]: 2025-12-02 16:54:55.283199197 +0000 UTC m=+0.114442775 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 16:54:57 compute-0 nova_compute[189459]: 2025-12-02 16:54:57.037 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:57 compute-0 podman[241970]: 2025-12-02 16:54:57.272241579 +0000 UTC m=+0.084681653 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  2 16:54:57 compute-0 podman[241969]: 2025-12-02 16:54:57.299880493 +0000 UTC m=+0.114412783 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, distribution-scope=public)
Dec  2 16:54:57 compute-0 nova_compute[189459]: 2025-12-02 16:54:57.604 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:54:59 compute-0 podman[203941]: time="2025-12-02T16:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:54:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:54:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4765 "" "Go-http-client/1.1"
Dec  2 16:55:01 compute-0 openstack_network_exporter[206093]: ERROR   16:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:55:01 compute-0 openstack_network_exporter[206093]: ERROR   16:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:55:01 compute-0 openstack_network_exporter[206093]: ERROR   16:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:55:01 compute-0 openstack_network_exporter[206093]: ERROR   16:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:55:01 compute-0 openstack_network_exporter[206093]: ERROR   16:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:55:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:01.862 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:01.863 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:01.864 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:02 compute-0 nova_compute[189459]: 2025-12-02 16:55:02.041 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:02 compute-0 nova_compute[189459]: 2025-12-02 16:55:02.606 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.048 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.049 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.073 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.077 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '839e5006-8465-4d21-8287-0bba4f28a358', 'name': 'vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.078 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T16:55:03.078288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.084 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.090 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.091 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.092 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T16:55:03.091672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T16:55:03.093985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.119 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.119 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.120 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.149 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.149 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.150 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.152 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T16:55:03.153062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.192 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 35810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.229 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/cpu volume: 167120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.231 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.231 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T16:55:03.231663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.317 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.318 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.318 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.425 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.426 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.426 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.429 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.429 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.430 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.430 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 418871740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.431 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 75002437 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.431 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 69536833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T16:55:03.428769) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.433 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.433 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T16:55:03.433634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.434 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.434 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.435 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.435 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.436 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.438 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.438 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.438 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.439 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.439 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.439 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.440 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.441 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.441 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.441 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.441 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.442 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.442 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.442 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.443 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.444 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.444 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.444 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.445 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T16:55:03.437918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T16:55:03.440939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T16:55:03.443569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.446 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.447 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.447 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.447 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.447 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 1353675681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.448 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 9551865 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.448 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T16:55:03.446797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.449 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.450 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.450 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.450 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.451 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T16:55:03.450075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.452 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.452 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.452 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.452 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.453 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.453 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.453 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.454 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.455 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.455 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.455 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T16:55:03.451992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T16:55:03.455180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.457 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.458 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.458 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.458 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.458 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.459 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.459 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.461 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.461 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.462 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.462 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.463 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.463 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.464 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.465 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.466 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.467 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.468 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T16:55:03.457067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T16:55:03.458499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T16:55:03.459958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T16:55:03.461289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T16:55:03.462889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T16:55:03.464610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T16:55:03.465665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T16:55:03.466666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T16:55:03.467815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:55:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:55:03.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T16:55:03.469730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:55:04 compute-0 podman[242007]: 2025-12-02 16:55:04.253637547 +0000 UTC m=+0.083616525 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:55:04 compute-0 podman[242008]: 2025-12-02 16:55:04.283947543 +0000 UTC m=+0.096276272 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:55:04 compute-0 podman[242006]: 2025-12-02 16:55:04.350238385 +0000 UTC m=+0.172947750 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 16:55:05 compute-0 nova_compute[189459]: 2025-12-02 16:55:05.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:05 compute-0 nova_compute[189459]: 2025-12-02 16:55:05.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 16:55:07 compute-0 nova_compute[189459]: 2025-12-02 16:55:07.046 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:07 compute-0 nova_compute[189459]: 2025-12-02 16:55:07.609 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:09 compute-0 nova_compute[189459]: 2025-12-02 16:55:09.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:09 compute-0 nova_compute[189459]: 2025-12-02 16:55:09.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:11 compute-0 nova_compute[189459]: 2025-12-02 16:55:11.426 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.051 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.614 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.624 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.625 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:55:12 compute-0 nova_compute[189459]: 2025-12-02 16:55:12.626 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.662 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.679 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.680 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.681 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.682 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.683 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 16:55:13 compute-0 nova_compute[189459]: 2025-12-02 16:55:13.701 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.432 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.469 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.470 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.471 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.471 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.585 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.673 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.676 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.742 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.751 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.813 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.815 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.876 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.883 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.958 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:14 compute-0 nova_compute[189459]: 2025-12-02 16:55:14.961 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.023 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.031 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.088 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.089 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.149 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.548 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.549 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5038MB free_disk=72.17973709106445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.550 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.550 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.725 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.727 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.727 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.728 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.798 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.926 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.927 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.948 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 16:55:15 compute-0 nova_compute[189459]: 2025-12-02 16:55:15.972 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 16:55:16 compute-0 nova_compute[189459]: 2025-12-02 16:55:16.025 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:55:16 compute-0 nova_compute[189459]: 2025-12-02 16:55:16.041 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:55:16 compute-0 nova_compute[189459]: 2025-12-02 16:55:16.043 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:55:16 compute-0 nova_compute[189459]: 2025-12-02 16:55:16.043 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:16 compute-0 podman[242097]: 2025-12-02 16:55:16.388115535 +0000 UTC m=+0.111650350 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Dec  2 16:55:17 compute-0 nova_compute[189459]: 2025-12-02 16:55:17.023 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:17 compute-0 nova_compute[189459]: 2025-12-02 16:55:17.023 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:17 compute-0 nova_compute[189459]: 2025-12-02 16:55:17.053 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:17 compute-0 nova_compute[189459]: 2025-12-02 16:55:17.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:55:17 compute-0 nova_compute[189459]: 2025-12-02 16:55:17.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:55:17 compute-0 nova_compute[189459]: 2025-12-02 16:55:17.615 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:22 compute-0 nova_compute[189459]: 2025-12-02 16:55:22.057 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:22 compute-0 nova_compute[189459]: 2025-12-02 16:55:22.618 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:24 compute-0 podman[242120]: 2025-12-02 16:55:24.297166362 +0000 UTC m=+0.115898353 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  2 16:55:24 compute-0 podman[242119]: 2025-12-02 16:55:24.303617043 +0000 UTC m=+0.120432843 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec  2 16:55:26 compute-0 podman[242155]: 2025-12-02 16:55:26.334791226 +0000 UTC m=+0.151312015 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  2 16:55:27 compute-0 nova_compute[189459]: 2025-12-02 16:55:27.061 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:27 compute-0 nova_compute[189459]: 2025-12-02 16:55:27.624 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:28 compute-0 podman[242176]: 2025-12-02 16:55:28.305039479 +0000 UTC m=+0.110046048 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, version=9.4, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  2 16:55:28 compute-0 podman[242177]: 2025-12-02 16:55:28.326219322 +0000 UTC m=+0.134333663 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  2 16:55:29 compute-0 podman[203941]: time="2025-12-02T16:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:55:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:55:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4756 "" "Go-http-client/1.1"
Dec  2 16:55:31 compute-0 nova_compute[189459]: 2025-12-02 16:55:31.201 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:31 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:31.202 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:55:31 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:31.205 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: ERROR   16:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: ERROR   16:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: ERROR   16:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: ERROR   16:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: ERROR   16:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:55:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:55:32 compute-0 nova_compute[189459]: 2025-12-02 16:55:32.066 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:32 compute-0 nova_compute[189459]: 2025-12-02 16:55:32.626 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:35 compute-0 podman[242212]: 2025-12-02 16:55:35.27454192 +0000 UTC m=+0.090769954 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 16:55:35 compute-0 podman[242218]: 2025-12-02 16:55:35.297888311 +0000 UTC m=+0.107273413 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 16:55:35 compute-0 podman[242211]: 2025-12-02 16:55:35.312792568 +0000 UTC m=+0.139270145 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:55:37 compute-0 nova_compute[189459]: 2025-12-02 16:55:37.069 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:37 compute-0 nova_compute[189459]: 2025-12-02 16:55:37.629 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:38.208 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.487 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.487 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.549 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.713 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.713 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.721 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.722 189463 INFO nova.compute.claims [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.850 189463 DEBUG nova.compute.provider_tree [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.873 189463 DEBUG nova.scheduler.client.report [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.896 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.897 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.942 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.942 189463 DEBUG nova.network.neutron [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.960 189463 INFO nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 16:55:40 compute-0 nova_compute[189459]: 2025-12-02 16:55:40.990 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.098 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.100 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.101 189463 INFO nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Creating image(s)#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.101 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.102 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.103 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.117 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.171 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.172 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.173 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.185 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.248 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.250 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.297 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.299 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.300 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.378 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.380 189463 DEBUG nova.virt.disk.api [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Checking if we can resize image /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.382 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.439 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.440 189463 DEBUG nova.virt.disk.api [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Cannot resize image /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.441 189463 DEBUG nova.objects.instance [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'migration_context' on Instance uuid c3d793a6-79d5-4b91-ac80-9ac02a5d36ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.455 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.456 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.457 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.469 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.529 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.530 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.531 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.543 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.596 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.597 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.639 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.641 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.642 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.709 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.710 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.711 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Ensure instance console log exists: /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.711 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.712 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:41 compute-0 nova_compute[189459]: 2025-12-02 16:55:41.712 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:42 compute-0 nova_compute[189459]: 2025-12-02 16:55:42.073 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:42 compute-0 nova_compute[189459]: 2025-12-02 16:55:42.631 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.379 189463 DEBUG nova.network.neutron [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Successfully updated port: 2b3cee36-c20f-440c-8026-d43bec6b580a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.397 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.398 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquired lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.399 189463 DEBUG nova.network.neutron [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.500 189463 DEBUG nova.compute.manager [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-changed-2b3cee36-c20f-440c-8026-d43bec6b580a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.501 189463 DEBUG nova.compute.manager [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Refreshing instance network info cache due to event network-changed-2b3cee36-c20f-440c-8026-d43bec6b580a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.502 189463 DEBUG oslo_concurrency.lockutils [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:55:45 compute-0 nova_compute[189459]: 2025-12-02 16:55:45.583 189463 DEBUG nova.network.neutron [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 16:55:46 compute-0 nova_compute[189459]: 2025-12-02 16:55:46.972 189463 DEBUG nova.network.neutron [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updating instance_info_cache with network_info: [{"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:55:46 compute-0 nova_compute[189459]: 2025-12-02 16:55:46.997 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Releasing lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:55:46 compute-0 nova_compute[189459]: 2025-12-02 16:55:46.998 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance network_info: |[{"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.000 189463 DEBUG oslo_concurrency.lockutils [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.001 189463 DEBUG nova.network.neutron [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Refreshing network info cache for port 2b3cee36-c20f-440c-8026-d43bec6b580a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.006 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Start _get_guest_xml network_info=[{"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}], 'ephemerals': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 1, 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.019 189463 WARNING nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.030 189463 DEBUG nova.virt.libvirt.host [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.032 189463 DEBUG nova.virt.libvirt.host [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.038 189463 DEBUG nova.virt.libvirt.host [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.040 189463 DEBUG nova.virt.libvirt.host [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.041 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.043 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T16:48:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8aba0aff-301c-4123-b0dc-aba3acd2a3ad',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.044 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.045 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.047 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.048 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.049 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.050 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.051 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.052 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.053 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.054 189463 DEBUG nova.virt.hardware [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.060 189463 DEBUG nova.virt.libvirt.vif [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:55:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm',id=3,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-2wdfa0ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:55:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTU5MzU2ODA5NTcwNTkyMTA4OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  2 16:55:47 compute-0 nova_compute[189459]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTU5MzU2ODA5NTcwNTkyMTA4OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=c3d793a6-79d5-4b91-ac80-9ac02a5d36ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.061 189463 DEBUG nova.network.os_vif_util [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.063 189463 DEBUG nova.network.os_vif_util [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.064 189463 DEBUG nova.objects.instance [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3d793a6-79d5-4b91-ac80-9ac02a5d36ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.077 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] End _get_guest_xml xml=<domain type="kvm">
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <uuid>c3d793a6-79d5-4b91-ac80-9ac02a5d36ce</uuid>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <name>instance-00000003</name>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <memory>524288</memory>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <metadata>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:name>vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm</nova:name>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 16:55:47</nova:creationTime>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:flavor name="m1.small">
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:memory>512</nova:memory>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:ephemeral>1</nova:ephemeral>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:user uuid="91c12bcb1ad14b95b1bdedf7527f1adf">admin</nova:user>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:project uuid="2f96d47197fa40f2a7126bf626847d74">admin</nova:project>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        <nova:port uuid="2b3cee36-c20f-440c-8026-d43bec6b580a">
Dec  2 16:55:47 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="192.168.0.244" ipVersion="4"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </metadata>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <system>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <entry name="serial">c3d793a6-79d5-4b91-ac80-9ac02a5d36ce</entry>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <entry name="uuid">c3d793a6-79d5-4b91-ac80-9ac02a5d36ce</entry>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </system>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <os>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </os>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <features>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <apic/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </features>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </clock>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </cpu>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  <devices>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <target dev="vdb" bus="virtio"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.config"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:1b:65:a3"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <target dev="tap2b3cee36-c2"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </interface>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/console.log" append="off"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </serial>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <video>
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </video>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </rng>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 16:55:47 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 16:55:47 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 16:55:47 compute-0 nova_compute[189459]:  </devices>
Dec  2 16:55:47 compute-0 nova_compute[189459]: </domain>
Dec  2 16:55:47 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.088 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Preparing to wait for external event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.088 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.088 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.088 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.089 189463 DEBUG nova.virt.libvirt.vif [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:55:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm',id=3,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-2wdfa0ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:55:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTU5MzU2ODA5NTcwNTkyMTA4OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.089 189463 DEBUG nova.network.os_vif_util [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.090 189463 DEBUG nova.network.os_vif_util [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.090 189463 DEBUG os_vif [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.091 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.093 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.094 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.094 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.099 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.099 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b3cee36-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.100 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b3cee36-c2, col_values=(('external_ids', {'iface-id': '2b3cee36-c20f-440c-8026-d43bec6b580a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:65:a3', 'vm-uuid': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.102 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.104 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 16:55:47 compute-0 NetworkManager[56503]: <info>  [1764694547.1053] manager: (tap2b3cee36-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.117 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.119 189463 INFO os_vif [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2')#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.227 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.228 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.229 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.229 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No VIF found with MAC fa:16:3e:1b:65:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.230 189463 INFO nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Using config drive#033[00m
Dec  2 16:55:47 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 16:55:47.060 189463 DEBUG nova.virt.libvirt.vif [None req-a83db659-77 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 16:55:47 compute-0 podman[242311]: 2025-12-02 16:55:47.279413621 +0000 UTC m=+0.112125522 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.640 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.750 189463 INFO nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Creating config drive at /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.config#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.763 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa6760cw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:55:47 compute-0 nova_compute[189459]: 2025-12-02 16:55:47.895 189463 DEBUG oslo_concurrency.processutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa6760cw" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:55:48 compute-0 kernel: tap2b3cee36-c2: entered promiscuous mode
Dec  2 16:55:48 compute-0 NetworkManager[56503]: <info>  [1764694548.0163] manager: (tap2b3cee36-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  2 16:55:48 compute-0 ovn_controller[97975]: 2025-12-02T16:55:48Z|00040|binding|INFO|Claiming lport 2b3cee36-c20f-440c-8026-d43bec6b580a for this chassis.
Dec  2 16:55:48 compute-0 ovn_controller[97975]: 2025-12-02T16:55:48Z|00041|binding|INFO|2b3cee36-c20f-440c-8026-d43bec6b580a: Claiming fa:16:3e:1b:65:a3 192.168.0.244
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.020 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.026 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:65:a3 192.168.0.244'], port_security=['fa:16:3e:1b:65:a3 192.168.0.244'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-lawun5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-port-etjiathcc44u', 'neutron:cidrs': '192.168.0.244/24', 'neutron:device_id': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-lawun5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-port-etjiathcc44u', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=2b3cee36-c20f-440c-8026-d43bec6b580a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.028 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 2b3cee36-c20f-440c-8026-d43bec6b580a in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d bound to our chassis#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.029 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 16:55:48 compute-0 ovn_controller[97975]: 2025-12-02T16:55:48Z|00042|binding|INFO|Setting lport 2b3cee36-c20f-440c-8026-d43bec6b580a ovn-installed in OVS
Dec  2 16:55:48 compute-0 ovn_controller[97975]: 2025-12-02T16:55:48Z|00043|binding|INFO|Setting lport 2b3cee36-c20f-440c-8026-d43bec6b580a up in Southbound
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.042 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.050 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.049 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0f153253-aaf6-4ab3-8390-6fee81a0b584]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:55:48 compute-0 systemd-machined[155878]: New machine qemu-3-instance-00000003.
Dec  2 16:55:48 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.096 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2cde01-78c5-4a1b-b8fd-e9773b4cf610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.101 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[23c8ec12-8136-44d8-9901-d6d5f0e98b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:55:48 compute-0 systemd-udevd[242355]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 16:55:48 compute-0 NetworkManager[56503]: <info>  [1764694548.1480] device (tap2b3cee36-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 16:55:48 compute-0 NetworkManager[56503]: <info>  [1764694548.1486] device (tap2b3cee36-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.146 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[3e38b02b-9e8d-428d-a37b-185e741f4d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.169 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[7f79c25c-4f7e-4a62-b2c4-8a6b1f1e07fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 29479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242361, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.186 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba2ff76-aabe-4cbe-ac0b-a573f00d3d68]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377200, 'tstamp': 377200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242364, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377202, 'tstamp': 377202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242364, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.188 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.190 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.192 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.192 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.193 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.193 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:55:48 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:55:48.194 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.456 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694548.4558203, c3d793a6-79d5-4b91-ac80-9ac02a5d36ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.457 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] VM Started (Lifecycle Event)#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.497 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.502 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694548.4560213, c3d793a6-79d5-4b91-ac80-9ac02a5d36ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.503 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] VM Paused (Lifecycle Event)#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.517 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.522 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.538 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.781 189463 DEBUG nova.compute.manager [req-6cd489e2-9d2e-420e-8cf8-849874f7b848 req-7a9cbb13-99a9-42b8-95bb-500748e6b0c3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.782 189463 DEBUG oslo_concurrency.lockutils [req-6cd489e2-9d2e-420e-8cf8-849874f7b848 req-7a9cbb13-99a9-42b8-95bb-500748e6b0c3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.782 189463 DEBUG oslo_concurrency.lockutils [req-6cd489e2-9d2e-420e-8cf8-849874f7b848 req-7a9cbb13-99a9-42b8-95bb-500748e6b0c3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.782 189463 DEBUG oslo_concurrency.lockutils [req-6cd489e2-9d2e-420e-8cf8-849874f7b848 req-7a9cbb13-99a9-42b8-95bb-500748e6b0c3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.783 189463 DEBUG nova.compute.manager [req-6cd489e2-9d2e-420e-8cf8-849874f7b848 req-7a9cbb13-99a9-42b8-95bb-500748e6b0c3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Processing event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.783 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.798 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694548.798286, c3d793a6-79d5-4b91-ac80-9ac02a5d36ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.799 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] VM Resumed (Lifecycle Event)#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.801 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.807 189463 INFO nova.virt.libvirt.driver [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance spawned successfully.#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.807 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.820 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.833 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.838 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.839 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.840 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.840 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.841 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.842 189463 DEBUG nova.virt.libvirt.driver [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.852 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.891 189463 INFO nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Took 7.79 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.892 189463 DEBUG nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.958 189463 INFO nova.compute.manager [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Took 8.27 seconds to build instance.#033[00m
Dec  2 16:55:48 compute-0 nova_compute[189459]: 2025-12-02 16:55:48.987 189463 DEBUG oslo_concurrency.lockutils [None req-a83db659-7756-4dcc-949f-b2dd8bd01079 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:48 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  2 16:55:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  2 16:55:49 compute-0 nova_compute[189459]: 2025-12-02 16:55:49.094 189463 DEBUG nova.network.neutron [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updated VIF entry in instance network info cache for port 2b3cee36-c20f-440c-8026-d43bec6b580a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 16:55:49 compute-0 nova_compute[189459]: 2025-12-02 16:55:49.096 189463 DEBUG nova.network.neutron [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updating instance_info_cache with network_info: [{"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:55:49 compute-0 nova_compute[189459]: 2025-12-02 16:55:49.114 189463 DEBUG oslo_concurrency.lockutils [req-755540a3-3eb7-4fc4-8030-37010f9f496a req-53bb02ca-b59f-4f69-88db-332f24ec10ae b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:55:50 compute-0 nova_compute[189459]: 2025-12-02 16:55:50.863 189463 DEBUG nova.compute.manager [req-e87d4e93-504b-41bb-9db6-f2d3c6a1f6e0 req-6ff17e78-5a76-4432-a281-f82166b85c41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:55:50 compute-0 nova_compute[189459]: 2025-12-02 16:55:50.865 189463 DEBUG oslo_concurrency.lockutils [req-e87d4e93-504b-41bb-9db6-f2d3c6a1f6e0 req-6ff17e78-5a76-4432-a281-f82166b85c41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:55:50 compute-0 nova_compute[189459]: 2025-12-02 16:55:50.866 189463 DEBUG oslo_concurrency.lockutils [req-e87d4e93-504b-41bb-9db6-f2d3c6a1f6e0 req-6ff17e78-5a76-4432-a281-f82166b85c41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:55:50 compute-0 nova_compute[189459]: 2025-12-02 16:55:50.867 189463 DEBUG oslo_concurrency.lockutils [req-e87d4e93-504b-41bb-9db6-f2d3c6a1f6e0 req-6ff17e78-5a76-4432-a281-f82166b85c41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:55:50 compute-0 nova_compute[189459]: 2025-12-02 16:55:50.868 189463 DEBUG nova.compute.manager [req-e87d4e93-504b-41bb-9db6-f2d3c6a1f6e0 req-6ff17e78-5a76-4432-a281-f82166b85c41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] No waiting events found dispatching network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 16:55:50 compute-0 nova_compute[189459]: 2025-12-02 16:55:50.869 189463 WARNING nova.compute.manager [req-e87d4e93-504b-41bb-9db6-f2d3c6a1f6e0 req-6ff17e78-5a76-4432-a281-f82166b85c41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received unexpected event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a for instance with vm_state active and task_state None.#033[00m
Dec  2 16:55:52 compute-0 nova_compute[189459]: 2025-12-02 16:55:52.103 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:52 compute-0 nova_compute[189459]: 2025-12-02 16:55:52.636 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:55 compute-0 podman[242394]: 2025-12-02 16:55:55.271387383 +0000 UTC m=+0.092359647 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec  2 16:55:55 compute-0 podman[242393]: 2025-12-02 16:55:55.308782107 +0000 UTC m=+0.129770562 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 16:55:57 compute-0 nova_compute[189459]: 2025-12-02 16:55:57.108 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:57 compute-0 podman[242429]: 2025-12-02 16:55:57.288967664 +0000 UTC m=+0.110693834 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:55:57 compute-0 nova_compute[189459]: 2025-12-02 16:55:57.638 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:55:59 compute-0 podman[242451]: 2025-12-02 16:55:59.319791027 +0000 UTC m=+0.136919002 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  2 16:55:59 compute-0 podman[242450]: 2025-12-02 16:55:59.324988235 +0000 UTC m=+0.139489360 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container)
Dec  2 16:55:59 compute-0 podman[203941]: time="2025-12-02T16:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:55:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:55:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: ERROR   16:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: ERROR   16:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: ERROR   16:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: ERROR   16:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: ERROR   16:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:56:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:56:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:56:01.862 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:56:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:56:01.863 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:56:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:56:01.868 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:56:02 compute-0 nova_compute[189459]: 2025-12-02 16:56:02.113 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:02 compute-0 nova_compute[189459]: 2025-12-02 16:56:02.643 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:06 compute-0 podman[242486]: 2025-12-02 16:56:06.275496282 +0000 UTC m=+0.089011968 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 16:56:06 compute-0 podman[242485]: 2025-12-02 16:56:06.305963022 +0000 UTC m=+0.121799330 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:56:06 compute-0 podman[242484]: 2025-12-02 16:56:06.324696591 +0000 UTC m=+0.140101397 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:56:07 compute-0 nova_compute[189459]: 2025-12-02 16:56:07.116 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:07 compute-0 nova_compute[189459]: 2025-12-02 16:56:07.647 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:10 compute-0 nova_compute[189459]: 2025-12-02 16:56:10.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:12 compute-0 nova_compute[189459]: 2025-12-02 16:56:12.120 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:12 compute-0 nova_compute[189459]: 2025-12-02 16:56:12.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:12 compute-0 nova_compute[189459]: 2025-12-02 16:56:12.650 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:13 compute-0 nova_compute[189459]: 2025-12-02 16:56:13.404 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.661 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.662 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.663 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:56:14 compute-0 nova_compute[189459]: 2025-12-02 16:56:14.664 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.838 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.866 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.867 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.868 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.868 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.902 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.904 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.904 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:56:15 compute-0 nova_compute[189459]: 2025-12-02 16:56:15.904 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.013 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.112 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.115 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.181 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.184 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.252 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.253 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.317 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.326 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.387 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.388 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.448 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.450 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.527 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.528 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.620 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.627 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.725 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.727 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.782 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.783 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.839 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.841 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:56:16 compute-0 nova_compute[189459]: 2025-12-02 16:56:16.906 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.123 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.287 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.288 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4903MB free_disk=72.17879486083984GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.289 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.289 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.576 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.577 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.577 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.577 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.578 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.652 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.657 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.677 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.701 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:56:17 compute-0 nova_compute[189459]: 2025-12-02 16:56:17.701 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:56:18 compute-0 ovn_controller[97975]: 2025-12-02T16:56:18Z|00044|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec  2 16:56:18 compute-0 nova_compute[189459]: 2025-12-02 16:56:18.243 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:18 compute-0 nova_compute[189459]: 2025-12-02 16:56:18.243 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:18 compute-0 nova_compute[189459]: 2025-12-02 16:56:18.244 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:18 compute-0 nova_compute[189459]: 2025-12-02 16:56:18.244 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:56:18 compute-0 nova_compute[189459]: 2025-12-02 16:56:18.244 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:56:18 compute-0 podman[242587]: 2025-12-02 16:56:18.31106163 +0000 UTC m=+0.133409248 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc.)
Dec  2 16:56:22 compute-0 nova_compute[189459]: 2025-12-02 16:56:22.126 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:22 compute-0 nova_compute[189459]: 2025-12-02 16:56:22.655 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:22 compute-0 ovn_controller[97975]: 2025-12-02T16:56:22Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:65:a3 192.168.0.244
Dec  2 16:56:22 compute-0 ovn_controller[97975]: 2025-12-02T16:56:22Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:65:a3 192.168.0.244
Dec  2 16:56:26 compute-0 podman[242625]: 2025-12-02 16:56:26.264872647 +0000 UTC m=+0.092552812 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec  2 16:56:26 compute-0 podman[242626]: 2025-12-02 16:56:26.26609415 +0000 UTC m=+0.084785796 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 16:56:27 compute-0 nova_compute[189459]: 2025-12-02 16:56:27.130 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:27 compute-0 nova_compute[189459]: 2025-12-02 16:56:27.664 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:28 compute-0 podman[242661]: 2025-12-02 16:56:28.282109999 +0000 UTC m=+0.102861066 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  2 16:56:29 compute-0 podman[203941]: time="2025-12-02T16:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:56:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:56:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  2 16:56:30 compute-0 podman[242680]: 2025-12-02 16:56:30.240471655 +0000 UTC m=+0.068076292 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 16:56:30 compute-0 podman[242679]: 2025-12-02 16:56:30.287116425 +0000 UTC m=+0.112877663 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, version=9.4, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm)
Dec  2 16:56:31 compute-0 openstack_network_exporter[206093]: ERROR   16:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:56:31 compute-0 openstack_network_exporter[206093]: ERROR   16:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:56:31 compute-0 openstack_network_exporter[206093]: ERROR   16:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:56:31 compute-0 openstack_network_exporter[206093]: ERROR   16:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:56:31 compute-0 openstack_network_exporter[206093]: ERROR   16:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:56:32 compute-0 nova_compute[189459]: 2025-12-02 16:56:32.133 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:32 compute-0 nova_compute[189459]: 2025-12-02 16:56:32.662 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:37 compute-0 nova_compute[189459]: 2025-12-02 16:56:37.136 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:37 compute-0 podman[242721]: 2025-12-02 16:56:37.269671008 +0000 UTC m=+0.084074148 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 16:56:37 compute-0 podman[242720]: 2025-12-02 16:56:37.308345678 +0000 UTC m=+0.124287299 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 16:56:37 compute-0 podman[242719]: 2025-12-02 16:56:37.32230593 +0000 UTC m=+0.147161199 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:56:37 compute-0 nova_compute[189459]: 2025-12-02 16:56:37.666 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:42 compute-0 nova_compute[189459]: 2025-12-02 16:56:42.140 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:42 compute-0 nova_compute[189459]: 2025-12-02 16:56:42.672 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:47 compute-0 nova_compute[189459]: 2025-12-02 16:56:47.145 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:47 compute-0 nova_compute[189459]: 2025-12-02 16:56:47.679 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:49 compute-0 podman[242791]: 2025-12-02 16:56:49.301423178 +0000 UTC m=+0.120397137 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Dec  2 16:56:52 compute-0 nova_compute[189459]: 2025-12-02 16:56:52.151 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:52 compute-0 nova_compute[189459]: 2025-12-02 16:56:52.683 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:57 compute-0 nova_compute[189459]: 2025-12-02 16:56:57.154 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:57 compute-0 podman[242814]: 2025-12-02 16:56:57.270898628 +0000 UTC m=+0.093141811 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team)
Dec  2 16:56:57 compute-0 podman[242815]: 2025-12-02 16:56:57.293025738 +0000 UTC m=+0.109042685 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Dec  2 16:56:57 compute-0 nova_compute[189459]: 2025-12-02 16:56:57.686 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:56:59 compute-0 podman[242849]: 2025-12-02 16:56:59.325243651 +0000 UTC m=+0.138698615 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 16:56:59 compute-0 podman[203941]: time="2025-12-02T16:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:56:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:56:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Dec  2 16:57:01 compute-0 podman[242871]: 2025-12-02 16:57:01.252928081 +0000 UTC m=+0.075267225 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 16:57:01 compute-0 podman[242870]: 2025-12-02 16:57:01.276212551 +0000 UTC m=+0.096442999 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, architecture=x86_64, release=1214.1726694543, version=9.4, config_id=edpm, release-0.7.12=, io.openshift.expose-services=, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Dec  2 16:57:01 compute-0 openstack_network_exporter[206093]: ERROR   16:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:57:01 compute-0 openstack_network_exporter[206093]: ERROR   16:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:57:01 compute-0 openstack_network_exporter[206093]: ERROR   16:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:57:01 compute-0 openstack_network_exporter[206093]: ERROR   16:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:57:01 compute-0 openstack_network_exporter[206093]: ERROR   16:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:57:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:01.863 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:01.863 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:01.864 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:02 compute-0 nova_compute[189459]: 2025-12-02 16:57:02.158 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:02 compute-0 nova_compute[189459]: 2025-12-02 16:57:02.692 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.049 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.049 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.071 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '839e5006-8465-4d21-8287-0bba4f28a358', 'name': 'vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.074 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.076 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.847 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 02 Dec 2025 16:57:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-36a52c65-4b66-409a-9e0e-80f50c9c4b7c x-openstack-request-id: req-36a52c65-4b66-409a-9e0e-80f50c9c4b7c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.847 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce", "name": "vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm", "status": "ACTIVE", "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "user_id": "91c12bcb1ad14b95b1bdedf7527f1adf", "metadata": {"metering.server_group": "a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea"}, "hostId": "037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059", "image": {"id": "5b0e8045-c81c-486a-86d2-bf0e0fd17a5a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"}]}, "flavor": {"id": "8aba0aff-301c-4123-b0dc-aba3acd2a3ad", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8aba0aff-301c-4123-b0dc-aba3acd2a3ad"}]}, "created": "2025-12-02T16:55:38Z", "updated": "2025-12-02T16:55:48Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.244", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:1b:65:a3"}, {"version": 4, "addr": "192.168.122.227", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:1b:65:a3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-02T16:55:48.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.847 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce used request id req-36a52c65-4b66-409a-9e0e-80f50c9c4b7c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.850 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce', 'name': 'vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.850 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.851 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T16:57:03.851720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.860 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.867 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.873 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c3d793a6-79d5-4b91-ac80-9ac02a5d36ce / tap2b3cee36-c2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.874 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.875 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T16:57:03.876666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.877 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.877 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.878 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.879 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.880 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T16:57:03.880478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.911 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.912 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.913 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.952 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.952 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.953 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:03.999 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.000 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.001 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.003 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.004 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T16:57:04.004427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.030 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 37490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.068 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/cpu volume: 287420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.108 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/cpu volume: 33360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.110 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T16:57:04.111212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.209 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.209 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.209 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.309 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.309 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.309 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.412 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.412 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.414 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.414 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.414 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.414 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 418871740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T16:57:04.413927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.415 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 75002437 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.415 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 69536833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.415 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 412604943 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.415 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 86706146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 66308231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.416 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.417 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.417 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.417 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.417 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.418 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.418 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.418 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.418 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T16:57:04.416857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T16:57:04.419553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.420 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.422 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.422 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.422 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.422 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.423 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.423 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.423 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.423 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.423 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.424 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.425 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T16:57:04.422041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.425 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T16:57:04.424914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.425 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.425 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.426 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.426 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.426 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.426 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.427 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.427 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.428 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.428 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.428 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.428 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 1353675681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.429 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 9551865 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.429 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.429 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 1373521669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.430 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 12454002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.430 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T16:57:04.428026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.431 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T16:57:04.431504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.433 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.433 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.433 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.433 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.433 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.434 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.434 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.434 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.435 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.436 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm>]
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T16:57:04.432885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T16:57:04.435336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T16:57:04.436612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T16:57:04.437594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T16:57:04.438536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.440 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.441 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T16:57:04.439861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.441 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.441 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T16:57:04.441068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.442 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.443 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T16:57:04.442565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T16:57:04.443952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes volume: 4962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T16:57:04.445150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.445 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.446 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.447 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.447 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T16:57:04.446771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm>]
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.448 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.449 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.449 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T16:57:04.448235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T16:57:04.448977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.449 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T16:57:04.450280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:57:04.454 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:57:07 compute-0 nova_compute[189459]: 2025-12-02 16:57:07.162 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:57:07 compute-0 nova_compute[189459]: 2025-12-02 16:57:07.699 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:57:08 compute-0 podman[242912]: 2025-12-02 16:57:08.262760538 +0000 UTC m=+0.083332710 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 16:57:08 compute-0 podman[242913]: 2025-12-02 16:57:08.276617447 +0000 UTC m=+0.085294402 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 16:57:08 compute-0 podman[242911]: 2025-12-02 16:57:08.334976801 +0000 UTC m=+0.160129215 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  2 16:57:12 compute-0 nova_compute[189459]: 2025-12-02 16:57:12.166 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:57:12 compute-0 nova_compute[189459]: 2025-12-02 16:57:12.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:12 compute-0 nova_compute[189459]: 2025-12-02 16:57:12.701 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:57:14 compute-0 nova_compute[189459]: 2025-12-02 16:57:14.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:16 compute-0 nova_compute[189459]: 2025-12-02 16:57:16.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:16 compute-0 nova_compute[189459]: 2025-12-02 16:57:16.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 16:57:16 compute-0 nova_compute[189459]: 2025-12-02 16:57:16.729 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 16:57:16 compute-0 nova_compute[189459]: 2025-12-02 16:57:16.730 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 16:57:16 compute-0 nova_compute[189459]: 2025-12-02 16:57:16.731 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  2 16:57:17 compute-0 nova_compute[189459]: 2025-12-02 16:57:17.170 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:57:17 compute-0 nova_compute[189459]: 2025-12-02 16:57:17.704 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.388 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.403 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.404 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.408 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.430 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.431 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.431 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.432 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.532 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.592 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.594 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.692 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.693 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.771 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.773 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.853 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.864 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.930 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:18 compute-0 nova_compute[189459]: 2025-12-02 16:57:18.931 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.005 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.007 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.086 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.087 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.181 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.190 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.253 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.255 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.355 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.360 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.422 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.423 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.508 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.898 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.900 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4857MB free_disk=72.1576919555664GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.900 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.901 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.971 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.972 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.972 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.973 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:57:19 compute-0 nova_compute[189459]: 2025-12-02 16:57:19.974 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:57:20 compute-0 nova_compute[189459]: 2025-12-02 16:57:20.086 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:57:20 compute-0 nova_compute[189459]: 2025-12-02 16:57:20.107 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:57:20 compute-0 nova_compute[189459]: 2025-12-02 16:57:20.110 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:57:20 compute-0 nova_compute[189459]: 2025-12-02 16:57:20.110 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:20 compute-0 podman[243015]: 2025-12-02 16:57:20.2889823 +0000 UTC m=+0.107397721 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec  2 16:57:21 compute-0 nova_compute[189459]: 2025-12-02 16:57:21.116 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:57:22 compute-0 nova_compute[189459]: 2025-12-02 16:57:22.175 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:22 compute-0 nova_compute[189459]: 2025-12-02 16:57:22.706 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:27 compute-0 nova_compute[189459]: 2025-12-02 16:57:27.181 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:27 compute-0 nova_compute[189459]: 2025-12-02 16:57:27.709 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:28 compute-0 podman[243036]: 2025-12-02 16:57:28.26257939 +0000 UTC m=+0.089313210 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  2 16:57:28 compute-0 podman[243037]: 2025-12-02 16:57:28.285236393 +0000 UTC m=+0.097535448 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  2 16:57:29 compute-0 podman[203941]: time="2025-12-02T16:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:57:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:57:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  2 16:57:30 compute-0 podman[243072]: 2025-12-02 16:57:30.260717786 +0000 UTC m=+0.089283688 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: ERROR   16:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: ERROR   16:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: ERROR   16:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: ERROR   16:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: ERROR   16:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:57:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:57:32 compute-0 nova_compute[189459]: 2025-12-02 16:57:32.185 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:32 compute-0 podman[243092]: 2025-12-02 16:57:32.257724052 +0000 UTC m=+0.077931006 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 16:57:32 compute-0 podman[243091]: 2025-12-02 16:57:32.267836301 +0000 UTC m=+0.092841653 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 16:57:32 compute-0 nova_compute[189459]: 2025-12-02 16:57:32.704 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:32 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:32.700 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:57:32 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:32.701 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 16:57:32 compute-0 nova_compute[189459]: 2025-12-02 16:57:32.712 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:37 compute-0 nova_compute[189459]: 2025-12-02 16:57:37.187 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:37 compute-0 nova_compute[189459]: 2025-12-02 16:57:37.718 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:39 compute-0 podman[243129]: 2025-12-02 16:57:39.272048258 +0000 UTC m=+0.087305566 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 16:57:39 compute-0 podman[243130]: 2025-12-02 16:57:39.274645537 +0000 UTC m=+0.081010568 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 16:57:39 compute-0 podman[243128]: 2025-12-02 16:57:39.313851011 +0000 UTC m=+0.136320911 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.299 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.301 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.357 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.481 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.482 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.489 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.490 189463 INFO nova.compute.claims [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.640 189463 DEBUG nova.compute.provider_tree [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.670 189463 DEBUG nova.scheduler.client.report [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.690 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.692 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.733 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.734 189463 DEBUG nova.network.neutron [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.754 189463 INFO nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.800 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.903 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.909 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.910 189463 INFO nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Creating image(s)#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.911 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.912 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.913 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.927 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.989 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.990 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:41 compute-0 nova_compute[189459]: 2025-12-02 16:57:41.991 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.003 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.061 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.063 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.105 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31,backing_fmt=raw /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.106 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "f75af7a5e837c1ca61378fc78133e18a40f43f31" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.107 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.176 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.187 189463 DEBUG nova.virt.disk.api [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Checking if we can resize image /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.189 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.204 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.251 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.252 189463 DEBUG nova.virt.disk.api [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Cannot resize image /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.253 189463 DEBUG nova.objects.instance [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'migration_context' on Instance uuid 941718a9-628f-4f41-81e3-225760dc6a62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.267 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.268 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.269 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.282 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.348 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.350 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.351 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.363 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.421 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.423 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.460 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.471 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.472 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.530 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.532 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.532 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Ensure instance console log exists: /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.533 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.534 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.534 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:42 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:42.703 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:42 compute-0 nova_compute[189459]: 2025-12-02 16:57:42.719 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.010 189463 DEBUG nova.network.neutron [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Successfully updated port: b511e990-3b17-4177-96a7-40fc44f7937a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.027 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.027 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquired lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.028 189463 DEBUG nova.network.neutron [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.147 189463 DEBUG nova.compute.manager [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-changed-b511e990-3b17-4177-96a7-40fc44f7937a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.148 189463 DEBUG nova.compute.manager [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Refreshing instance network info cache due to event network-changed-b511e990-3b17-4177-96a7-40fc44f7937a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.148 189463 DEBUG oslo_concurrency.lockutils [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.233 189463 DEBUG nova.network.neutron [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.871 189463 DEBUG nova.network.neutron [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.899 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Releasing lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.900 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Instance network_info: |[{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.900 189463 DEBUG oslo_concurrency.lockutils [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.900 189463 DEBUG nova.network.neutron [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Refreshing network info cache for port b511e990-3b17-4177-96a7-40fc44f7937a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.903 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Start _get_guest_xml network_info=[{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}], 'ephemerals': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 1, 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.912 189463 WARNING nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.922 189463 DEBUG nova.virt.libvirt.host [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.923 189463 DEBUG nova.virt.libvirt.host [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.929 189463 DEBUG nova.virt.libvirt.host [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.930 189463 DEBUG nova.virt.libvirt.host [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.930 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.930 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T16:48:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8aba0aff-301c-4123-b0dc-aba3acd2a3ad',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T16:48:47Z,direct_url=<?>,disk_format='qcow2',id=5b0e8045-c81c-486a-86d2-bf0e0fd17a5a,min_disk=0,min_ram=0,name='cirros',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T16:48:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.931 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.931 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.931 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.931 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.932 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.932 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.932 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.933 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.933 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.933 189463 DEBUG nova.virt.hardware [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.937 189463 DEBUG nova.virt.libvirt.vif [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz',id=4,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-lluuumkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:57:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTI4NTQ2NDI4MTcxODA2NTY0ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  2 16:57:43 compute-0 nova_compute[189459]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTI4NTQ2NDI4MTcxODA2NTY0ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=941718a9-628f-4f41-81e3-225760dc6a62,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.938 189463 DEBUG nova.network.os_vif_util [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.939 189463 DEBUG nova.network.os_vif_util [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.939 189463 DEBUG nova.objects.instance [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 941718a9-628f-4f41-81e3-225760dc6a62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.953 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] End _get_guest_xml xml=<domain type="kvm">
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <uuid>941718a9-628f-4f41-81e3-225760dc6a62</uuid>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <name>instance-00000004</name>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <memory>524288</memory>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <metadata>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:name>vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz</nova:name>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 16:57:43</nova:creationTime>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:flavor name="m1.small">
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:memory>512</nova:memory>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:ephemeral>1</nova:ephemeral>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:user uuid="91c12bcb1ad14b95b1bdedf7527f1adf">admin</nova:user>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:project uuid="2f96d47197fa40f2a7126bf626847d74">admin</nova:project>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        <nova:port uuid="b511e990-3b17-4177-96a7-40fc44f7937a">
Dec  2 16:57:43 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="192.168.0.90" ipVersion="4"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </metadata>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <system>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <entry name="serial">941718a9-628f-4f41-81e3-225760dc6a62</entry>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <entry name="uuid">941718a9-628f-4f41-81e3-225760dc6a62</entry>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </system>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <os>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </os>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <features>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <apic/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </features>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </clock>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </cpu>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  <devices>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <target dev="vdb" bus="virtio"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.config"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </disk>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:3f:03:ce"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <target dev="tapb511e990-3b"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </interface>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/console.log" append="off"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </serial>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <video>
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </video>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </rng>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 16:57:43 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 16:57:43 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 16:57:43 compute-0 nova_compute[189459]:  </devices>
Dec  2 16:57:43 compute-0 nova_compute[189459]: </domain>
Dec  2 16:57:43 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.953 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Preparing to wait for external event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.954 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.954 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.954 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.955 189463 DEBUG nova.virt.libvirt.vif [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T16:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz',id=4,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-lluuumkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T16:57:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTI4NTQ2NDI4MTcxODA2NTY0ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  2 16:57:43 compute-0 nova_compute[189459]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTI4NTQ2NDI4MTcxODA2NTY0ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=941718a9-628f-4f41-81e3-225760dc6a62,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.955 189463 DEBUG nova.network.os_vif_util [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.956 189463 DEBUG nova.network.os_vif_util [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.956 189463 DEBUG os_vif [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.957 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.957 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.958 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.962 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.963 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb511e990-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.963 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb511e990-3b, col_values=(('external_ids', {'iface-id': 'b511e990-3b17-4177-96a7-40fc44f7937a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:03:ce', 'vm-uuid': '941718a9-628f-4f41-81e3-225760dc6a62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.966 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:43 compute-0 NetworkManager[56503]: <info>  [1764694663.9681] manager: (tapb511e990-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.969 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.979 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:43 compute-0 nova_compute[189459]: 2025-12-02 16:57:43.980 189463 INFO os_vif [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b')#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.042 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.042 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.043 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.043 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No VIF found with MAC fa:16:3e:3f:03:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.043 189463 INFO nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Using config drive#033[00m
Dec  2 16:57:44 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 16:57:43.937 189463 DEBUG nova.virt.libvirt.vif [None req-3ef3bf8c-f5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 16:57:44 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 16:57:43.955 189463 DEBUG nova.virt.libvirt.vif [None req-3ef3bf8c-f5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.356 189463 INFO nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Creating config drive at /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.config#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.364 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxh4gbizg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.506 189463 DEBUG oslo_concurrency.processutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxh4gbizg" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:57:44 compute-0 kernel: tapb511e990-3b: entered promiscuous mode
Dec  2 16:57:44 compute-0 NetworkManager[56503]: <info>  [1764694664.6052] manager: (tapb511e990-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  2 16:57:44 compute-0 ovn_controller[97975]: 2025-12-02T16:57:44Z|00045|binding|INFO|Claiming lport b511e990-3b17-4177-96a7-40fc44f7937a for this chassis.
Dec  2 16:57:44 compute-0 ovn_controller[97975]: 2025-12-02T16:57:44Z|00046|binding|INFO|b511e990-3b17-4177-96a7-40fc44f7937a: Claiming fa:16:3e:3f:03:ce 192.168.0.90
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.608 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.622 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:03:ce 192.168.0.90'], port_security=['fa:16:3e:3f:03:ce 192.168.0.90'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-lawun5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-port-iulc4swxkdk6', 'neutron:cidrs': '192.168.0.90/24', 'neutron:device_id': '941718a9-628f-4f41-81e3-225760dc6a62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-lawun5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-port-iulc4swxkdk6', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=b511e990-3b17-4177-96a7-40fc44f7937a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.625 106835 INFO neutron.agent.ovn.metadata.agent [-] Port b511e990-3b17-4177-96a7-40fc44f7937a in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d bound to our chassis#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.628 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.630 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.633 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:44 compute-0 ovn_controller[97975]: 2025-12-02T16:57:44Z|00047|binding|INFO|Setting lport b511e990-3b17-4177-96a7-40fc44f7937a ovn-installed in OVS
Dec  2 16:57:44 compute-0 ovn_controller[97975]: 2025-12-02T16:57:44Z|00048|binding|INFO|Setting lport b511e990-3b17-4177-96a7-40fc44f7937a up in Southbound
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.638 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:44 compute-0 systemd-udevd[243248]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.662 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[82eb93ee-b97f-4123-8b8c-c9c91ba5632f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:57:44 compute-0 NetworkManager[56503]: <info>  [1764694664.6696] device (tapb511e990-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 16:57:44 compute-0 NetworkManager[56503]: <info>  [1764694664.6705] device (tapb511e990-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 16:57:44 compute-0 systemd-machined[155878]: New machine qemu-4-instance-00000004.
Dec  2 16:57:44 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.701 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[a55097e3-b1c4-4c60-8ef3-f80ae080bbb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.706 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[b791e5fc-e326-4b37-9095-892ab6b88c43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.741 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[c90adc02-3aa8-46d4-9f12-197aa80b4d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.758 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[08282e64-326e-41bb-8b7e-d55214298ccb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 29479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243257, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.783 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c4d751-2785-40f7-95c6-056358d78dac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377200, 'tstamp': 377200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243261, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377202, 'tstamp': 377202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243261, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.786 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.788 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.790 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.790 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.791 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.791 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 16:57:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:57:44.792 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.933 189463 DEBUG nova.network.neutron [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updated VIF entry in instance network info cache for port b511e990-3b17-4177-96a7-40fc44f7937a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.933 189463 DEBUG nova.network.neutron [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:57:44 compute-0 nova_compute[189459]: 2025-12-02 16:57:44.949 189463 DEBUG oslo_concurrency.lockutils [req-c53d2c16-d7bd-4a64-970a-79a0c03c8b69 req-0215435f-a041-44ea-9486-6ab2b972e7e9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.197 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694665.1968303, 941718a9-628f-4f41-81e3-225760dc6a62 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.199 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] VM Started (Lifecycle Event)#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.223 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.233 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694665.1969755, 941718a9-628f-4f41-81e3-225760dc6a62 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.234 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] VM Paused (Lifecycle Event)#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.252 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.256 189463 DEBUG nova.compute.manager [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.257 189463 DEBUG oslo_concurrency.lockutils [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.257 189463 DEBUG oslo_concurrency.lockutils [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.258 189463 DEBUG oslo_concurrency.lockutils [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.258 189463 DEBUG nova.compute.manager [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Processing event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.258 189463 DEBUG nova.compute.manager [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.259 189463 DEBUG oslo_concurrency.lockutils [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.259 189463 DEBUG oslo_concurrency.lockutils [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.260 189463 DEBUG oslo_concurrency.lockutils [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.260 189463 DEBUG nova.compute.manager [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] No waiting events found dispatching network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.260 189463 WARNING nova.compute.manager [req-68606684-97f6-4b32-b780-58dfe2fb5ddf req-b310df82-9aeb-4ff8-9aa1-03b697f21bc2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received unexpected event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a for instance with vm_state building and task_state spawning.#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.261 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.269 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764694665.267433, 941718a9-628f-4f41-81e3-225760dc6a62 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.269 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] VM Resumed (Lifecycle Event)#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.271 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.278 189463 INFO nova.virt.libvirt.driver [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Instance spawned successfully.#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.279 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.289 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.300 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.306 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.306 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.306 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.307 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.307 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.308 189463 DEBUG nova.virt.libvirt.driver [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.338 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.377 189463 INFO nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Took 3.47 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.378 189463 DEBUG nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.431 189463 INFO nova.compute.manager [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Took 3.98 seconds to build instance.#033[00m
Dec  2 16:57:45 compute-0 nova_compute[189459]: 2025-12-02 16:57:45.445 189463 DEBUG oslo_concurrency.lockutils [None req-3ef3bf8c-f598-4aac-8b83-ab17c8f3885e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:57:47 compute-0 nova_compute[189459]: 2025-12-02 16:57:47.722 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:48 compute-0 nova_compute[189459]: 2025-12-02 16:57:48.968 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:51 compute-0 podman[243274]: 2025-12-02 16:57:51.286406048 +0000 UTC m=+0.103462106 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 16:57:52 compute-0 nova_compute[189459]: 2025-12-02 16:57:52.726 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:53 compute-0 nova_compute[189459]: 2025-12-02 16:57:53.971 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:57 compute-0 nova_compute[189459]: 2025-12-02 16:57:57.728 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:58 compute-0 nova_compute[189459]: 2025-12-02 16:57:58.974 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:57:59 compute-0 podman[243294]: 2025-12-02 16:57:59.324907756 +0000 UTC m=+0.127725592 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 16:57:59 compute-0 podman[243293]: 2025-12-02 16:57:59.338308243 +0000 UTC m=+0.150179330 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  2 16:57:59 compute-0 podman[203941]: time="2025-12-02T16:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:57:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:57:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4768 "" "Go-http-client/1.1"
Dec  2 16:58:01 compute-0 podman[243332]: 2025-12-02 16:58:01.278550547 +0000 UTC m=+0.097593810 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: ERROR   16:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: ERROR   16:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: ERROR   16:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: ERROR   16:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: ERROR   16:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:58:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:58:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:58:01.864 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:58:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:58:01.864 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:58:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:58:01.865 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:58:02 compute-0 nova_compute[189459]: 2025-12-02 16:58:02.730 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:03 compute-0 podman[243353]: 2025-12-02 16:58:03.277535585 +0000 UTC m=+0.096439899 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:58:03 compute-0 podman[243352]: 2025-12-02 16:58:03.283081233 +0000 UTC m=+0.105780968 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=)
Dec  2 16:58:03 compute-0 nova_compute[189459]: 2025-12-02 16:58:03.978 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:07 compute-0 nova_compute[189459]: 2025-12-02 16:58:07.734 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:08 compute-0 nova_compute[189459]: 2025-12-02 16:58:08.982 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:10 compute-0 podman[243389]: 2025-12-02 16:58:10.289518579 +0000 UTC m=+0.107461981 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:58:10 compute-0 podman[243390]: 2025-12-02 16:58:10.305990588 +0000 UTC m=+0.110649167 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:58:10 compute-0 podman[243388]: 2025-12-02 16:58:10.344504533 +0000 UTC m=+0.170280314 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 16:58:12 compute-0 nova_compute[189459]: 2025-12-02 16:58:12.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:12 compute-0 nova_compute[189459]: 2025-12-02 16:58:12.736 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:13 compute-0 nova_compute[189459]: 2025-12-02 16:58:13.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:13 compute-0 nova_compute[189459]: 2025-12-02 16:58:13.986 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:14 compute-0 ovn_controller[97975]: 2025-12-02T16:58:14Z|00049|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  2 16:58:15 compute-0 nova_compute[189459]: 2025-12-02 16:58:15.433 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:16 compute-0 nova_compute[189459]: 2025-12-02 16:58:16.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.441 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.441 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.733 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.755 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.826 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.827 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.915 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.917 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.986 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:17 compute-0 nova_compute[189459]: 2025-12-02 16:58:17.988 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.052 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.064 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.124 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.126 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 ovn_controller[97975]: 2025-12-02T16:58:18Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:03:ce 192.168.0.90
Dec  2 16:58:18 compute-0 ovn_controller[97975]: 2025-12-02T16:58:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:03:ce 192.168.0.90
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.187 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.188 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.252 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.254 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.349 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.359 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.434 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.435 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.529 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.530 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.608 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.609 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.669 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.677 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.752 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.753 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.813 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.815 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.883 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.884 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.952 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:58:18 compute-0 nova_compute[189459]: 2025-12-02 16:58:18.989 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.363 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.365 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4722MB free_disk=72.15660858154297GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.366 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.367 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.453 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.453 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.454 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.454 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.454 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.454 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.572 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.598 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.647 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 16:58:19 compute-0 nova_compute[189459]: 2025-12-02 16:58:19.648 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:58:20 compute-0 nova_compute[189459]: 2025-12-02 16:58:20.648 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:20 compute-0 nova_compute[189459]: 2025-12-02 16:58:20.649 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 16:58:21 compute-0 nova_compute[189459]: 2025-12-02 16:58:21.738 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 16:58:21 compute-0 nova_compute[189459]: 2025-12-02 16:58:21.739 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 16:58:21 compute-0 nova_compute[189459]: 2025-12-02 16:58:21.746 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 16:58:22 compute-0 podman[243527]: 2025-12-02 16:58:22.239909587 +0000 UTC m=+0.065653589 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 16:58:22 compute-0 nova_compute[189459]: 2025-12-02 16:58:22.741 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:23 compute-0 nova_compute[189459]: 2025-12-02 16:58:23.993 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.791 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updating instance_info_cache with network_info: [{"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.818 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.818 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.819 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.820 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.820 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:58:24 compute-0 nova_compute[189459]: 2025-12-02 16:58:24.821 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 16:58:27 compute-0 nova_compute[189459]: 2025-12-02 16:58:27.744 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:28 compute-0 nova_compute[189459]: 2025-12-02 16:58:28.997 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:29 compute-0 podman[203941]: time="2025-12-02T16:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:58:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:58:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  2 16:58:30 compute-0 podman[243547]: 2025-12-02 16:58:30.32123703 +0000 UTC m=+0.149043859 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 16:58:30 compute-0 podman[243548]: 2025-12-02 16:58:30.323882001 +0000 UTC m=+0.132335295 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: ERROR   16:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: ERROR   16:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: ERROR   16:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: ERROR   16:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: ERROR   16:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:58:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:58:32 compute-0 podman[243589]: 2025-12-02 16:58:32.297609168 +0000 UTC m=+0.117705035 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3)
Dec  2 16:58:32 compute-0 nova_compute[189459]: 2025-12-02 16:58:32.749 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:34 compute-0 nova_compute[189459]: 2025-12-02 16:58:34.000 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:34 compute-0 podman[243608]: 2025-12-02 16:58:34.309611914 +0000 UTC m=+0.106866097 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  2 16:58:34 compute-0 podman[243607]: 2025-12-02 16:58:34.320811612 +0000 UTC m=+0.120606042 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec  2 16:58:37 compute-0 nova_compute[189459]: 2025-12-02 16:58:37.752 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:39 compute-0 nova_compute[189459]: 2025-12-02 16:58:39.003 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:41 compute-0 podman[243647]: 2025-12-02 16:58:41.27087668 +0000 UTC m=+0.088052386 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 16:58:41 compute-0 podman[243648]: 2025-12-02 16:58:41.271575448 +0000 UTC m=+0.082668472 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 16:58:41 compute-0 podman[243646]: 2025-12-02 16:58:41.329680026 +0000 UTC m=+0.144429517 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  2 16:58:42 compute-0 nova_compute[189459]: 2025-12-02 16:58:42.755 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:44 compute-0 nova_compute[189459]: 2025-12-02 16:58:44.005 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:47 compute-0 nova_compute[189459]: 2025-12-02 16:58:47.759 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:49 compute-0 nova_compute[189459]: 2025-12-02 16:58:49.009 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:52 compute-0 nova_compute[189459]: 2025-12-02 16:58:52.763 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:53 compute-0 podman[243718]: 2025-12-02 16:58:53.281571202 +0000 UTC m=+0.109513848 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41)
Dec  2 16:58:54 compute-0 nova_compute[189459]: 2025-12-02 16:58:54.013 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:57 compute-0 nova_compute[189459]: 2025-12-02 16:58:57.767 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:59 compute-0 nova_compute[189459]: 2025-12-02 16:58:59.017 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:58:59 compute-0 podman[203941]: time="2025-12-02T16:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:58:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:58:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Dec  2 16:59:01 compute-0 podman[243740]: 2025-12-02 16:59:01.283669502 +0000 UTC m=+0.101323590 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:59:01 compute-0 podman[243741]: 2025-12-02 16:59:01.299990299 +0000 UTC m=+0.114119313 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: ERROR   16:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: ERROR   16:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: ERROR   16:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: ERROR   16:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: ERROR   16:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:59:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:59:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:59:01.865 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:59:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:59:01.866 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:59:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 16:59:01.868 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:59:02 compute-0 nova_compute[189459]: 2025-12-02 16:59:02.770 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.049 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.049 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.058 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '839e5006-8465-4d21-8287-0bba4f28a358', 'name': 'vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.069 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce', 'name': 'vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.071 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 941718a9-628f-4f41-81e3-225760dc6a62 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.072 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/941718a9-628f-4f41-81e3-225760dc6a62 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 16:59:03 compute-0 podman[243780]: 2025-12-02 16:59:03.249953517 +0000 UTC m=+0.085517067 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.910 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 02 Dec 2025 16:59:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-203e6371-3eb2-4547-8eed-824b9390dbde x-openstack-request-id: req-203e6371-3eb2-4547-8eed-824b9390dbde _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.910 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "941718a9-628f-4f41-81e3-225760dc6a62", "name": "vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz", "status": "ACTIVE", "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "user_id": "91c12bcb1ad14b95b1bdedf7527f1adf", "metadata": {"metering.server_group": "a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea"}, "hostId": "037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059", "image": {"id": "5b0e8045-c81c-486a-86d2-bf0e0fd17a5a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/5b0e8045-c81c-486a-86d2-bf0e0fd17a5a"}]}, "flavor": {"id": "8aba0aff-301c-4123-b0dc-aba3acd2a3ad", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8aba0aff-301c-4123-b0dc-aba3acd2a3ad"}]}, "created": "2025-12-02T16:57:39Z", "updated": "2025-12-02T16:57:45Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.90", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:03:ce"}, {"version": 4, "addr": "192.168.122.185", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:03:ce"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/941718a9-628f-4f41-81e3-225760dc6a62"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/941718a9-628f-4f41-81e3-225760dc6a62"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-02T16:57:45.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.910 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/941718a9-628f-4f41-81e3-225760dc6a62 used request id req-203e6371-3eb2-4547-8eed-824b9390dbde request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.912 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '941718a9-628f-4f41-81e3-225760dc6a62', 'name': 'vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.913 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.913 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T16:59:03.913647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.920 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.925 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.929 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.934 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 941718a9-628f-4f41-81e3-225760dc6a62 / tapb511e990-3b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.935 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.936 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.936 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.936 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.937 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.938 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T16:59:03.936716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.938 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.939 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.939 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.939 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T16:59:03.939961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.971 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.972 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.972 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.997 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.998 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:03.998 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 nova_compute[189459]: 2025-12-02 16:59:04.020 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.027 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.027 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.027 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.050 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.050 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.050 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.051 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.052 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T16:59:04.052131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.075 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 39010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.093 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/cpu volume: 327060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.117 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/cpu volume: 35000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.146 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/cpu volume: 32800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.147 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T16:59:04.148303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.240 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.240 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.241 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.356 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.357 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.358 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.479 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.479 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.480 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.617 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.619 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.620 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.622 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.623 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.624 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.625 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.625 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 418871740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.626 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 75002437 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T16:59:04.623101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.627 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 69536833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.627 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 412604943 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.628 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 86706146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.629 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 66308231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.629 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 717183131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.630 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 81550079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.630 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 63467364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.632 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.633 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.633 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.634 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T16:59:04.633120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.635 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.635 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.635 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.636 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.636 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.636 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.637 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.637 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.637 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.637 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.639 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.639 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.640 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T16:59:04.639506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.640 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.640 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.641 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.641 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.641 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.641 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.642 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.642 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.642 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.643 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.644 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T16:59:04.644652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.645 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.645 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.645 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.646 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.646 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.646 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.647 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.647 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.647 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.647 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.648 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.649 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.649 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.649 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.650 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.650 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.651 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.651 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.651 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.652 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.652 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.653 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.653 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.653 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T16:59:04.649330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.655 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.656 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.656 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.657 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 1357245475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.657 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 9551865 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.657 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.658 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 1373521669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T16:59:04.655790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.658 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 12454002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.659 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.659 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 709154876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.659 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 8231189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.660 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.660 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.661 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.661 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.661 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.661 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.661 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.662 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.662 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.663 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.664 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.664 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.664 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.665 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.665 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.666 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.666 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.666 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T16:59:04.661292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.667 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T16:59:04.664240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.668 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.668 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.668 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.669 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T16:59:04.670535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.670 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.671 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.671 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.671 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T16:59:04.672558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.672 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz>]
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.673 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T16:59:04.673660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.675 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.675 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.675 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.675 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.676 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T16:59:04.675154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.676 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.677 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T16:59:04.677218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.678 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.679 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.679 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.679 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T16:59:04.678780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T16:59:04.680878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.681 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.681 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.681 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.681 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.682 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.683 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes volume: 7634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.683 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.683 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T16:59:04.682709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.684 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.685 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T16:59:04.684843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.685 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.685 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.685 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.686 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T16:59:04.686651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.687 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.687 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.687 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/memory.usage volume: 49.015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T16:59:04.688252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.688 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz>]
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.690 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.691 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.691 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T16:59:04.690918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.691 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.692 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.692 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.693 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.694 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T16:59:04.693731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.694 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.695 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 16:59:04.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 16:59:05 compute-0 podman[243798]: 2025-12-02 16:59:05.29256135 +0000 UTC m=+0.105567614 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Dec  2 16:59:05 compute-0 podman[243799]: 2025-12-02 16:59:05.311807864 +0000 UTC m=+0.114340418 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec  2 16:59:07 compute-0 nova_compute[189459]: 2025-12-02 16:59:07.772 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:09 compute-0 nova_compute[189459]: 2025-12-02 16:59:09.023 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:12 compute-0 podman[243841]: 2025-12-02 16:59:12.26818276 +0000 UTC m=+0.085524318 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 16:59:12 compute-0 podman[243835]: 2025-12-02 16:59:12.269033782 +0000 UTC m=+0.094987850 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 16:59:12 compute-0 podman[243834]: 2025-12-02 16:59:12.31605351 +0000 UTC m=+0.148962154 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 16:59:12 compute-0 nova_compute[189459]: 2025-12-02 16:59:12.775 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:13 compute-0 nova_compute[189459]: 2025-12-02 16:59:13.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:59:14 compute-0 nova_compute[189459]: 2025-12-02 16:59:14.027 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:16 compute-0 nova_compute[189459]: 2025-12-02 16:59:16.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:59:17 compute-0 nova_compute[189459]: 2025-12-02 16:59:17.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:59:17 compute-0 nova_compute[189459]: 2025-12-02 16:59:17.777 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:18 compute-0 nova_compute[189459]: 2025-12-02 16:59:18.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.031 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.458 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.459 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.460 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.460 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.705 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.767 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.768 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.830 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.831 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.892 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.893 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.973 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:19 compute-0 nova_compute[189459]: 2025-12-02 16:59:19.982 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.062 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.063 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.127 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.128 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.214 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.215 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.331 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.341 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.408 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.409 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.485 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.486 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.577 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.578 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.671 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.678 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.738 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.741 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.809 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.810 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.874 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.877 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 16:59:20 compute-0 nova_compute[189459]: 2025-12-02 16:59:20.939 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.416 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.417 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4676MB free_disk=72.13630294799805GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.418 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.418 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.531 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.532 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.532 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.532 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.532 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.532 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.616 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.654 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.656 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 16:59:21 compute-0 nova_compute[189459]: 2025-12-02 16:59:21.656 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 16:59:22 compute-0 nova_compute[189459]: 2025-12-02 16:59:22.656 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:59:22 compute-0 nova_compute[189459]: 2025-12-02 16:59:22.657 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 16:59:22 compute-0 nova_compute[189459]: 2025-12-02 16:59:22.657 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  2 16:59:22 compute-0 nova_compute[189459]: 2025-12-02 16:59:22.781 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:59:23 compute-0 nova_compute[189459]: 2025-12-02 16:59:23.770 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 16:59:23 compute-0 nova_compute[189459]: 2025-12-02 16:59:23.770 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 16:59:23 compute-0 nova_compute[189459]: 2025-12-02 16:59:23.771 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  2 16:59:23 compute-0 nova_compute[189459]: 2025-12-02 16:59:23.771 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 16:59:24 compute-0 nova_compute[189459]: 2025-12-02 16:59:24.034 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:59:24 compute-0 podman[243954]: 2025-12-02 16:59:24.281557575 +0000 UTC m=+0.107099164 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41)
Dec  2 16:59:25 compute-0 nova_compute[189459]: 2025-12-02 16:59:25.993 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 16:59:26 compute-0 nova_compute[189459]: 2025-12-02 16:59:26.011 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 16:59:26 compute-0 nova_compute[189459]: 2025-12-02 16:59:26.012 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  2 16:59:26 compute-0 nova_compute[189459]: 2025-12-02 16:59:26.013 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:59:26 compute-0 nova_compute[189459]: 2025-12-02 16:59:26.013 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 16:59:26 compute-0 nova_compute[189459]: 2025-12-02 16:59:26.013 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 16:59:27 compute-0 nova_compute[189459]: 2025-12-02 16:59:27.783 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:59:29 compute-0 nova_compute[189459]: 2025-12-02 16:59:29.037 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:59:29 compute-0 podman[203941]: time="2025-12-02T16:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:59:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:59:29 compute-0 podman[203941]: @ - - [02/Dec/2025:16:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: ERROR   16:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: ERROR   16:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: ERROR   16:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: ERROR   16:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: ERROR   16:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 16:59:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 16:59:32 compute-0 podman[243976]: 2025-12-02 16:59:32.277724034 +0000 UTC m=+0.100763915 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 16:59:32 compute-0 podman[243977]: 2025-12-02 16:59:32.28354626 +0000 UTC m=+0.097441667 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  2 16:59:32 compute-0 nova_compute[189459]: 2025-12-02 16:59:32.786 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:59:34 compute-0 nova_compute[189459]: 2025-12-02 16:59:34.041 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 16:59:34 compute-0 podman[244015]: 2025-12-02 16:59:34.297963199 +0000 UTC m=+0.100700744 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 16:59:36 compute-0 podman[244035]: 2025-12-02 16:59:36.275731819 +0000 UTC m=+0.083470303 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 16:59:36 compute-0 podman[244034]: 2025-12-02 16:59:36.28176874 +0000 UTC m=+0.099107420 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Dec  2 16:59:37 compute-0 nova_compute[189459]: 2025-12-02 16:59:37.790 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:39 compute-0 nova_compute[189459]: 2025-12-02 16:59:39.045 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:42 compute-0 nova_compute[189459]: 2025-12-02 16:59:42.793 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:43 compute-0 podman[244079]: 2025-12-02 16:59:43.267476306 +0000 UTC m=+0.076302331 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:59:43 compute-0 podman[244072]: 2025-12-02 16:59:43.304753123 +0000 UTC m=+0.129827062 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Dec  2 16:59:43 compute-0 podman[244073]: 2025-12-02 16:59:43.306641014 +0000 UTC m=+0.124005967 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 16:59:44 compute-0 nova_compute[189459]: 2025-12-02 16:59:44.049 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 16:59:47 compute-0 nova_compute[189459]: 2025-12-02 16:59:47.795 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:49 compute-0 nova_compute[189459]: 2025-12-02 16:59:49.052 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:52 compute-0 nova_compute[189459]: 2025-12-02 16:59:52.799 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:54 compute-0 nova_compute[189459]: 2025-12-02 16:59:54.055 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:55 compute-0 podman[244148]: 2025-12-02 16:59:55.333753881 +0000 UTC m=+0.158623422 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
Dec  2 16:59:57 compute-0 nova_compute[189459]: 2025-12-02 16:59:57.801 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:59 compute-0 nova_compute[189459]: 2025-12-02 16:59:59.059 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 16:59:59 compute-0 podman[203941]: time="2025-12-02T16:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 16:59:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 16:59:59 compute-0 podman[203941]: @ - - [02/Dec/2025:16:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4773 "" "Go-http-client/1.1"
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: ERROR   17:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: ERROR   17:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: ERROR   17:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: ERROR   17:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: ERROR   17:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:00:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:00:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:00:01.866 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:00:01.867 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:00:01.867 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:02 compute-0 nova_compute[189459]: 2025-12-02 17:00:02.805 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:03 compute-0 podman[244169]: 2025-12-02 17:00:03.269163557 +0000 UTC m=+0.090540862 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  2 17:00:03 compute-0 podman[244168]: 2025-12-02 17:00:03.283766038 +0000 UTC m=+0.104336081 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:00:04 compute-0 nova_compute[189459]: 2025-12-02 17:00:04.062 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:05 compute-0 podman[244206]: 2025-12-02 17:00:05.261225767 +0000 UTC m=+0.080354369 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi)
Dec  2 17:00:07 compute-0 podman[244225]: 2025-12-02 17:00:07.274328721 +0000 UTC m=+0.093616464 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Dec  2 17:00:07 compute-0 podman[244226]: 2025-12-02 17:00:07.303011258 +0000 UTC m=+0.122440114 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:00:07 compute-0 nova_compute[189459]: 2025-12-02 17:00:07.808 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:09 compute-0 nova_compute[189459]: 2025-12-02 17:00:09.065 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:12 compute-0 nova_compute[189459]: 2025-12-02 17:00:12.811 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:14 compute-0 nova_compute[189459]: 2025-12-02 17:00:14.069 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:14 compute-0 podman[244265]: 2025-12-02 17:00:14.280204937 +0000 UTC m=+0.090506870 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:00:14 compute-0 podman[244264]: 2025-12-02 17:00:14.297623713 +0000 UTC m=+0.112543390 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:00:14 compute-0 podman[244263]: 2025-12-02 17:00:14.333872262 +0000 UTC m=+0.158699274 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:00:14 compute-0 nova_compute[189459]: 2025-12-02 17:00:14.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:17 compute-0 nova_compute[189459]: 2025-12-02 17:00:17.814 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.442 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.442 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.443 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.443 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.464 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:00:18 compute-0 nova_compute[189459]: 2025-12-02 17:00:18.464 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:19 compute-0 nova_compute[189459]: 2025-12-02 17:00:19.072 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:19 compute-0 nova_compute[189459]: 2025-12-02 17:00:19.425 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:19 compute-0 nova_compute[189459]: 2025-12-02 17:00:19.426 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.432 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.460 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.461 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.461 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.462 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.583 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.651 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.652 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.750 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.752 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.834 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.835 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.952 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:21 compute-0 nova_compute[189459]: 2025-12-02 17:00:21.967 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.060 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.062 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.164 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.166 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.230 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.232 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.293 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.307 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.371 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.372 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.471 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.472 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.565 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.566 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.643 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.654 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.740 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.742 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.816 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.832 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.833 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.926 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.930 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:00:22 compute-0 nova_compute[189459]: 2025-12-02 17:00:22.993 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.551 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.553 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4625MB free_disk=72.13628387451172GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.554 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.555 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.852 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.852 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.853 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.854 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.855 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.855 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:00:23 compute-0 nova_compute[189459]: 2025-12-02 17:00:23.944 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.048 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.049 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.069 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.075 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.095 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.197 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.212 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.213 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:00:24 compute-0 nova_compute[189459]: 2025-12-02 17:00:24.214 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:25 compute-0 nova_compute[189459]: 2025-12-02 17:00:25.193 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:25 compute-0 nova_compute[189459]: 2025-12-02 17:00:25.193 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:00:25 compute-0 nova_compute[189459]: 2025-12-02 17:00:25.813 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:00:25 compute-0 nova_compute[189459]: 2025-12-02 17:00:25.813 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:00:25 compute-0 nova_compute[189459]: 2025-12-02 17:00:25.813 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:00:26 compute-0 podman[244381]: 2025-12-02 17:00:26.280312544 +0000 UTC m=+0.078217491 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.220 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [{"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.275 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.276 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.276 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.277 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.277 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:00:27 compute-0 nova_compute[189459]: 2025-12-02 17:00:27.821 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:29 compute-0 nova_compute[189459]: 2025-12-02 17:00:29.078 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:29 compute-0 podman[203941]: time="2025-12-02T17:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:00:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:00:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: ERROR   17:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: ERROR   17:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: ERROR   17:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: ERROR   17:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: ERROR   17:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:00:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:00:32 compute-0 nova_compute[189459]: 2025-12-02 17:00:32.823 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:34 compute-0 nova_compute[189459]: 2025-12-02 17:00:34.081 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:34 compute-0 podman[244403]: 2025-12-02 17:00:34.289240027 +0000 UTC m=+0.101288399 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  2 17:00:34 compute-0 podman[244404]: 2025-12-02 17:00:34.307579877 +0000 UTC m=+0.114782200 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:00:36 compute-0 podman[244440]: 2025-12-02 17:00:36.280844166 +0000 UTC m=+0.084363707 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.731 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.781 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.782 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid 839e5006-8465-4d21-8287-0bba4f28a358 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.783 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid c3d793a6-79d5-4b91-ac80-9ac02a5d36ce _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.783 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid 941718a9-628f-4f41-81e3-225760dc6a62 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.784 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.785 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.786 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.789 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "839e5006-8465-4d21-8287-0bba4f28a358" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.790 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.791 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.792 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.793 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "941718a9-628f-4f41-81e3-225760dc6a62" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.826 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.832 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.872 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "839e5006-8465-4d21-8287-0bba4f28a358" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.875 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:37 compute-0 nova_compute[189459]: 2025-12-02 17:00:37.890 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "941718a9-628f-4f41-81e3-225760dc6a62" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:00:38 compute-0 podman[244460]: 2025-12-02 17:00:38.29961827 +0000 UTC m=+0.124362986 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, architecture=x86_64, release-0.7.12=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, version=9.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:00:38 compute-0 podman[244461]: 2025-12-02 17:00:38.302040365 +0000 UTC m=+0.109257862 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:00:39 compute-0 nova_compute[189459]: 2025-12-02 17:00:39.083 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:42 compute-0 nova_compute[189459]: 2025-12-02 17:00:42.829 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:44 compute-0 nova_compute[189459]: 2025-12-02 17:00:44.086 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:00:44 compute-0 podman[244500]: 2025-12-02 17:00:44.80870414 +0000 UTC m=+0.112139299 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:00:44 compute-0 podman[244501]: 2025-12-02 17:00:44.83153786 +0000 UTC m=+0.126491263 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:00:44 compute-0 podman[244499]: 2025-12-02 17:00:44.836063451 +0000 UTC m=+0.138258617 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 17:00:47 compute-0 nova_compute[189459]: 2025-12-02 17:00:47.832 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:00:49 compute-0 nova_compute[189459]: 2025-12-02 17:00:49.089 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:00:52 compute-0 nova_compute[189459]: 2025-12-02 17:00:52.835 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:00:54 compute-0 nova_compute[189459]: 2025-12-02 17:00:54.091 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:00:57 compute-0 podman[244571]: 2025-12-02 17:00:57.268216771 +0000 UTC m=+0.089012760 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:00:57 compute-0 nova_compute[189459]: 2025-12-02 17:00:57.839 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:00:59 compute-0 nova_compute[189459]: 2025-12-02 17:00:59.095 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:00:59 compute-0 podman[203941]: time="2025-12-02T17:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:00:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:00:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
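The two podman service lines above are libpod REST access-log records in a common-log-style shape: quoted request line, then status code and response size. A minimal sketch for pulling those fields out of such a line (the helper name and regex are illustrative, not part of podman's tooling):

```python
import re

# Matches the quoted request plus status and byte count, e.g.
#   "GET /v4.9.3/libpod/containers/json?... HTTP/1.1" 200 29524
ACCESS_RE = re.compile(r'"(GET|POST|PUT|DELETE) (\S+) HTTP/1\.1" (\d{3}) (\d+)')

def parse_access(line: str):
    """Return (method, path, status, size_bytes) or None if no request is found."""
    m = ACCESS_RE.search(line)
    if not m:
        return None
    method, path, status, size = m.groups()
    return method, path, int(status), int(size)
```

Useful when skimming a journal export for slow or failing libpod API calls rather than reading the raw lines.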
Dec  2 17:01:01 compute-0 openstack_network_exporter[206093]: ERROR   17:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:01:01 compute-0 openstack_network_exporter[206093]: ERROR   17:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:01:01 compute-0 openstack_network_exporter[206093]: ERROR   17:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:01:01 compute-0 openstack_network_exporter[206093]: ERROR   17:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:01:01 compute-0 openstack_network_exporter[206093]: ERROR   17:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
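The openstack_network_exporter ERROR lines above repeat at each scrape because the expected ovsdb-server and ovn-northd control sockets are absent on this compute node. A small sketch (function name and regex are my own, not exporter code) that tallies these errors by source location, which makes the repetition pattern obvious in a large journal export:

```python
import re

# Matches exporter lines like:
#   openstack_network_exporter[206093]: ERROR   17:01:01 appctl.go:144: Failed to ...
ERR_RE = re.compile(r"openstack_network_exporter\[\d+\]: ERROR\s+\S+\s+(\S+): (.+)")

def tally_exporter_errors(log_text: str) -> dict:
    """Count exporter ERROR lines grouped by source location (file:line)."""
    counts: dict = {}
    for line in log_text.splitlines():
        m = ERR_RE.search(line)
        if m:
            loc = m.group(1)
            counts[loc] = counts.get(loc, 0) + 1
    return counts
```

A high, steady count for one location usually means a persistent configuration gap (here, missing control sockets) rather than a transient fault.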
Dec  2 17:01:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:01.867 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:01:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:01.869 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:01:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:01.870 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:01:02 compute-0 nova_compute[189459]: 2025-12-02 17:01:02.842 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.050 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.051 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.057 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.061 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '839e5006-8465-4d21-8287-0bba4f28a358', 'name': 'vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.064 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce', 'name': 'vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '941718a9-628f-4f41-81e3-225760dc6a62', 'name': 'vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:01:03.068211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.071 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.075 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.078 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.083 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.084 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.085 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.085 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.085 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:01:03.084667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:01:03.086660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.118 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.119 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.119 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.145 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.145 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.146 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.176 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.177 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.177 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.207 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.208 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.208 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.209 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:01:03.209965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.238 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 40630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.273 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/cpu volume: 328680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.310 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/cpu volume: 36640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.339 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/cpu volume: 34450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.341 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:01:03.341310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.436 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.437 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.438 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.518 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.519 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.519 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.582 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.583 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.584 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.666 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.667 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.667 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.668 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.669 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.669 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.670 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.670 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 418871740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.671 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 75002437 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.671 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.latency volume: 69536833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.672 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 412604943 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.672 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 86706146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:01:03.669067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.672 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 66308231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.673 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 717183131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.673 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 81550079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.673 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 63467364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.675 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.675 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.676 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:01:03.675577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.676 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.676 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.677 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.677 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.677 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.677 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.678 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.678 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.678 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.679 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.679 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.680 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.681 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.681 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.681 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.682 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:01:03.680986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.682 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.683 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.683 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.683 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.683 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.684 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.684 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.684 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.685 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.685 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.685 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.686 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.686 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.686 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.686 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.687 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:01:03.686039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.687 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.689 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.689 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.690 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.690 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.690 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.690 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.691 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.692 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.692 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.692 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.693 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.693 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.693 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.693 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.694 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.694 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.694 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.694 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.695 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.695 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.695 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.697 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.697 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.697 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.697 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.697 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.698 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 1357245475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.698 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 9551865 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.698 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.698 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 1373521669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.698 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 12454002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.699 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.699 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 709154876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.699 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 8231189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.699 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.700 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.701 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.701 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.701 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.701 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:01:03.692620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:01:03.697081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:01:03.701046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.703 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.703 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.703 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.703 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.704 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.704 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.704 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.704 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.705 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.705 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.705 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.705 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.706 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:01:03.703151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.707 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.707 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.708 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.708 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.710 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:01:03.707077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.711 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.711 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.711 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.711 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:01:03.709491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:01:03.710977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.713 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.714 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.714 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.715 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:01:03.712921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.715 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:01:03.714338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.716 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:01:03.716740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.717 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.717 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.719 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.719 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes volume: 7704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.719 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.719 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.720 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.721 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.721 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.721 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.722 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:01:03.718940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:01:03.720580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:01:03.722566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.723 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.724 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/memory.usage volume: 49.015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.724 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.725 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.725 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:01:03.725339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.725 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.726 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.726 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:01:03.727209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 DEBUG ceilometer.compute.pollsters [-] 839e5006-8465-4d21-8287-0bba4f28a358/network.outgoing.packets volume: 68 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.727 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.728 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:01:03.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:01:04 compute-0 nova_compute[189459]: 2025-12-02 17:01:04.098 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:05 compute-0 podman[244607]: 2025-12-02 17:01:05.285024858 +0000 UTC m=+0.110446404 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:01:05 compute-0 podman[244608]: 2025-12-02 17:01:05.295275372 +0000 UTC m=+0.116550627 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 17:01:07 compute-0 podman[244644]: 2025-12-02 17:01:07.262253442 +0000 UTC m=+0.074604576 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:01:07 compute-0 nova_compute[189459]: 2025-12-02 17:01:07.845 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:09 compute-0 nova_compute[189459]: 2025-12-02 17:01:09.102 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:09 compute-0 podman[244664]: 2025-12-02 17:01:09.235752536 +0000 UTC m=+0.064317090 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  2 17:01:09 compute-0 podman[244663]: 2025-12-02 17:01:09.271768729 +0000 UTC m=+0.097950399 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, release=1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:01:12 compute-0 nova_compute[189459]: 2025-12-02 17:01:12.847 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:14 compute-0 nova_compute[189459]: 2025-12-02 17:01:14.105 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:15 compute-0 podman[244702]: 2025-12-02 17:01:15.255004559 +0000 UTC m=+0.069004396 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:01:15 compute-0 podman[244701]: 2025-12-02 17:01:15.273971796 +0000 UTC m=+0.092848304 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:01:15 compute-0 podman[244700]: 2025-12-02 17:01:15.32123426 +0000 UTC m=+0.145553033 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:01:16 compute-0 nova_compute[189459]: 2025-12-02 17:01:16.473 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:17 compute-0 nova_compute[189459]: 2025-12-02 17:01:17.849 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:19 compute-0 nova_compute[189459]: 2025-12-02 17:01:19.112 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:19 compute-0 nova_compute[189459]: 2025-12-02 17:01:19.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:19 compute-0 nova_compute[189459]: 2025-12-02 17:01:19.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:20 compute-0 nova_compute[189459]: 2025-12-02 17:01:20.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:22 compute-0 nova_compute[189459]: 2025-12-02 17:01:22.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:22 compute-0 nova_compute[189459]: 2025-12-02 17:01:22.853 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:23 compute-0 nova_compute[189459]: 2025-12-02 17:01:23.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:23 compute-0 nova_compute[189459]: 2025-12-02 17:01:23.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 17:01:23 compute-0 nova_compute[189459]: 2025-12-02 17:01:23.901 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 17:01:23 compute-0 nova_compute[189459]: 2025-12-02 17:01:23.901 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:01:23 compute-0 nova_compute[189459]: 2025-12-02 17:01:23.902 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  2 17:01:24 compute-0 nova_compute[189459]: 2025-12-02 17:01:24.114 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.755 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updating instance_info_cache with network_info: [{"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.771 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.771 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.772 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.773 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.773 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.773 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.804 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.805 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.805 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.806 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  2 17:01:25 compute-0 nova_compute[189459]: 2025-12-02 17:01:25.951 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.052 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.053 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.117 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.119 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.192 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.194 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.256 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.264 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.323 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.325 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.406 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.407 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.492 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.493 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.550 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.558 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.644 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.645 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.702 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.703 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.758 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.759 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.822 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.829 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.891 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.892 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.959 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:26 compute-0 nova_compute[189459]: 2025-12-02 17:01:26.961 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.025 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.026 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.109 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.564 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.565 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=72.13635635375977GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.565 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.565 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.691 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.692 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 839e5006-8465-4d21-8287-0bba4f28a358 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.692 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.692 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.692 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.692 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.800 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.828 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.836 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.837 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:01:27 compute-0 nova_compute[189459]: 2025-12-02 17:01:27.855 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:28 compute-0 podman[244822]: 2025-12-02 17:01:28.294019816 +0000 UTC m=+0.109768916 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.117 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.278 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.279 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.279 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.279 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.280 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.281 189463 INFO nova.compute.manager [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Terminating instance#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.283 189463 DEBUG nova.compute.manager [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:01:29 compute-0 kernel: tap14dc4429-05 (unregistering): left promiscuous mode
Dec  2 17:01:29 compute-0 NetworkManager[56503]: <info>  [1764694889.3302] device (tap14dc4429-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:01:29 compute-0 ovn_controller[97975]: 2025-12-02T17:01:29Z|00050|binding|INFO|Releasing lport 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 from this chassis (sb_readonly=0)
Dec  2 17:01:29 compute-0 ovn_controller[97975]: 2025-12-02T17:01:29Z|00051|binding|INFO|Setting lport 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 down in Southbound
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.343 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 ovn_controller[97975]: 2025-12-02T17:01:29Z|00052|binding|INFO|Removing iface tap14dc4429-05 ovn-installed in OVS
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.358 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.374 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:39:f2 192.168.0.6'], port_security=['fa:16:3e:de:39:f2 192.168.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-lawun5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-port-nmqrzuzw3ryx', 'neutron:cidrs': '192.168.0.6/24', 'neutron:device_id': '839e5006-8465-4d21-8287-0bba4f28a358', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-lawun5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-port-nmqrzuzw3ryx', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=14dc4429-05ef-4ac6-9fa4-500c0ce93c01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.377 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d unbound from our chassis#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.379 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 17:01:29 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  2 17:01:29 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 39.499s CPU time.
Dec  2 17:01:29 compute-0 systemd-machined[155878]: Machine qemu-2-instance-00000002 terminated.
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.394 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f56ac2ca-1489-4d79-8104-26fbb42061f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.423 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[dd963201-5c0c-44e4-8808-057ea263e151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.425 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[fad24bb3-eaef-4398-a265-284769f70949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.450 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[c15cb1e4-8336-47f4-aeb9-c601adea108c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.468 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[aab83b4f-d5dc-446a-9d09-961c7ee322cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 44219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244856, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.486 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[32bef8f1-25d4-4585-a439-f578d8472fdc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377200, 'tstamp': 377200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244857, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377202, 'tstamp': 377202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244857, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.487 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.489 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.495 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.496 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.496 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.497 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:01:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:29.497 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.516 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.525 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.585 189463 INFO nova.virt.libvirt.driver [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Instance destroyed successfully.#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.585 189463 DEBUG nova.objects.instance [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'resources' on Instance uuid 839e5006-8465-4d21-8287-0bba4f28a358 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.602 189463 DEBUG nova.virt.libvirt.vif [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T16:51:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-rpqbvuu5j44c-ihexdhw3efvn-vnf-5jnu27lkpn5d',id=2,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T16:51:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-auysmsw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T16:51:28Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Nzk3MzExMzA4MzIzMTYzOTMwMj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  2 17:01:29 compute-0 nova_compute[189459]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Nzk3M
zExMzA4MzIzMTYzOTMwMj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5NzMxMTMwODMyMzE2MzkzMDI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTczMTEzMDgzMjMxNjM5MzAyPT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=839e5006-8465-4d21-8287-0bba4f28a358,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.602 189463 DEBUG nova.network.os_vif_util [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "address": "fa:16:3e:de:39:f2", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14dc4429-05", "ovs_interfaceid": "14dc4429-05ef-4ac6-9fa4-500c0ce93c01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.603 189463 DEBUG nova.network.os_vif_util [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.603 189463 DEBUG os_vif [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.606 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.606 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14dc4429-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.609 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.612 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.615 189463 INFO os_vif [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:39:f2,bridge_name='br-int',has_traffic_filtering=True,id=14dc4429-05ef-4ac6-9fa4-500c0ce93c01,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap14dc4429-05')#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.615 189463 INFO nova.virt.libvirt.driver [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Deleting instance files /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358_del#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.616 189463 INFO nova.virt.libvirt.driver [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Deletion of /var/lib/nova/instances/839e5006-8465-4d21-8287-0bba4f28a358_del complete#033[00m
Dec  2 17:01:29 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 17:01:29.602 189463 DEBUG nova.virt.libvirt.vif [None req-c167df25-d0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.696 189463 DEBUG nova.virt.libvirt.host [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.697 189463 INFO nova.virt.libvirt.host [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] UEFI support detected#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.702 189463 INFO nova.compute.manager [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.703 189463 DEBUG oslo.service.loopingcall [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.704 189463 DEBUG nova.compute.manager [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:01:29 compute-0 nova_compute[189459]: 2025-12-02 17:01:29.705 189463 DEBUG nova.network.neutron [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:01:29 compute-0 podman[203941]: time="2025-12-02T17:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:01:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:01:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4763 "" "Go-http-client/1.1"
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.152 189463 DEBUG nova.compute.manager [req-1345a4e8-57a7-4e4d-ab7f-46fade92c44c req-08cb98ef-15fb-48da-ac27-4224068b0bd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-vif-unplugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.153 189463 DEBUG oslo_concurrency.lockutils [req-1345a4e8-57a7-4e4d-ab7f-46fade92c44c req-08cb98ef-15fb-48da-ac27-4224068b0bd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.154 189463 DEBUG oslo_concurrency.lockutils [req-1345a4e8-57a7-4e4d-ab7f-46fade92c44c req-08cb98ef-15fb-48da-ac27-4224068b0bd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.154 189463 DEBUG oslo_concurrency.lockutils [req-1345a4e8-57a7-4e4d-ab7f-46fade92c44c req-08cb98ef-15fb-48da-ac27-4224068b0bd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.155 189463 DEBUG nova.compute.manager [req-1345a4e8-57a7-4e4d-ab7f-46fade92c44c req-08cb98ef-15fb-48da-ac27-4224068b0bd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] No waiting events found dispatching network-vif-unplugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.155 189463 DEBUG nova.compute.manager [req-1345a4e8-57a7-4e4d-ab7f-46fade92c44c req-08cb98ef-15fb-48da-ac27-4224068b0bd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-vif-unplugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:01:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:30.356 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:01:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:30.357 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:01:30 compute-0 nova_compute[189459]: 2025-12-02 17:01:30.364 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: ERROR   17:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: ERROR   17:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: ERROR   17:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: ERROR   17:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: ERROR   17:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:01:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:01:31 compute-0 nova_compute[189459]: 2025-12-02 17:01:31.940 189463 DEBUG nova.network.neutron [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:01:31 compute-0 nova_compute[189459]: 2025-12-02 17:01:31.962 189463 INFO nova.compute.manager [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Took 2.26 seconds to deallocate network for instance.#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.016 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.017 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.176 189463 DEBUG nova.compute.provider_tree [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.209 189463 DEBUG nova.scheduler.client.report [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.238 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.264 189463 INFO nova.scheduler.client.report [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Deleted allocations for instance 839e5006-8465-4d21-8287-0bba4f28a358#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.315 189463 DEBUG nova.compute.manager [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.315 189463 DEBUG oslo_concurrency.lockutils [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "839e5006-8465-4d21-8287-0bba4f28a358-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.316 189463 DEBUG oslo_concurrency.lockutils [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.316 189463 DEBUG oslo_concurrency.lockutils [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.317 189463 DEBUG nova.compute.manager [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] No waiting events found dispatching network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.317 189463 WARNING nova.compute.manager [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received unexpected event network-vif-plugged-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.318 189463 DEBUG nova.compute.manager [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Received event network-changed-14dc4429-05ef-4ac6-9fa4-500c0ce93c01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.318 189463 DEBUG nova.compute.manager [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Refreshing instance network info cache due to event network-changed-14dc4429-05ef-4ac6-9fa4-500c0ce93c01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.319 189463 DEBUG oslo_concurrency.lockutils [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.319 189463 DEBUG oslo_concurrency.lockutils [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.320 189463 DEBUG nova.network.neutron [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Refreshing network info cache for port 14dc4429-05ef-4ac6-9fa4-500c0ce93c01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.363 189463 DEBUG oslo_concurrency.lockutils [None req-c167df25-d0a9-400f-8779-9a780d6a0625 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "839e5006-8465-4d21-8287-0bba4f28a358" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.863 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:32 compute-0 nova_compute[189459]: 2025-12-02 17:01:32.991 189463 DEBUG nova.network.neutron [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:01:34 compute-0 nova_compute[189459]: 2025-12-02 17:01:34.071 189463 DEBUG nova.network.neutron [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:01:34 compute-0 nova_compute[189459]: 2025-12-02 17:01:34.093 189463 DEBUG oslo_concurrency.lockutils [req-d418b379-5a64-4c12-be73-e156e3b449b3 req-3f9b3eef-7475-4da0-8bc2-0ee2f586d7f3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-839e5006-8465-4d21-8287-0bba4f28a358" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:01:34 compute-0 nova_compute[189459]: 2025-12-02 17:01:34.611 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:36 compute-0 podman[244880]: 2025-12-02 17:01:36.328562094 +0000 UTC m=+0.123341668 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:01:36 compute-0 podman[244881]: 2025-12-02 17:01:36.335236863 +0000 UTC m=+0.126800841 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:01:37 compute-0 nova_compute[189459]: 2025-12-02 17:01:37.868 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:38 compute-0 podman[244920]: 2025-12-02 17:01:38.286132724 +0000 UTC m=+0.097921639 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:01:39 compute-0 nova_compute[189459]: 2025-12-02 17:01:39.614 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:40 compute-0 podman[244942]: 2025-12-02 17:01:40.292073758 +0000 UTC m=+0.114748299 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  2 17:01:40 compute-0 podman[244943]: 2025-12-02 17:01:40.296821605 +0000 UTC m=+0.115516499 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  2 17:01:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:01:40.361 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:01:42 compute-0 nova_compute[189459]: 2025-12-02 17:01:42.870 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:44 compute-0 nova_compute[189459]: 2025-12-02 17:01:44.582 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764694889.5798717, 839e5006-8465-4d21-8287-0bba4f28a358 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:01:44 compute-0 nova_compute[189459]: 2025-12-02 17:01:44.583 189463 INFO nova.compute.manager [-] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:01:44 compute-0 nova_compute[189459]: 2025-12-02 17:01:44.616 189463 DEBUG nova.compute.manager [None req-ce3946d6-9c60-4f25-b096-4c2a954736da - - - - - -] [instance: 839e5006-8465-4d21-8287-0bba4f28a358] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:01:44 compute-0 nova_compute[189459]: 2025-12-02 17:01:44.617 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:46 compute-0 podman[244982]: 2025-12-02 17:01:46.269890667 +0000 UTC m=+0.088073666 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:01:46 compute-0 podman[244983]: 2025-12-02 17:01:46.307811841 +0000 UTC m=+0.122668231 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:01:46 compute-0 podman[244981]: 2025-12-02 17:01:46.332210533 +0000 UTC m=+0.148752088 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:01:47 compute-0 nova_compute[189459]: 2025-12-02 17:01:47.873 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:49 compute-0 nova_compute[189459]: 2025-12-02 17:01:49.619 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:52 compute-0 nova_compute[189459]: 2025-12-02 17:01:52.876 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:54 compute-0 nova_compute[189459]: 2025-12-02 17:01:54.622 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:57 compute-0 nova_compute[189459]: 2025-12-02 17:01:57.878 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:59 compute-0 podman[245051]: 2025-12-02 17:01:59.29559781 +0000 UTC m=+0.107683850 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Dec  2 17:01:59 compute-0 nova_compute[189459]: 2025-12-02 17:01:59.625 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:01:59 compute-0 podman[203941]: time="2025-12-02T17:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:01:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:01:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: ERROR   17:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: ERROR   17:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: ERROR   17:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: ERROR   17:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: ERROR   17:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:02:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:02:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:02:01.869 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:02:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:02:01.869 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:02:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:02:01.870 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:02:02 compute-0 nova_compute[189459]: 2025-12-02 17:02:02.880 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:04 compute-0 ovn_controller[97975]: 2025-12-02T17:02:04Z|00053|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec  2 17:02:04 compute-0 nova_compute[189459]: 2025-12-02 17:02:04.628 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:07 compute-0 podman[245074]: 2025-12-02 17:02:07.267110074 +0000 UTC m=+0.084575983 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 17:02:07 compute-0 podman[245073]: 2025-12-02 17:02:07.283919113 +0000 UTC m=+0.106211631 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec  2 17:02:07 compute-0 nova_compute[189459]: 2025-12-02 17:02:07.884 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:09 compute-0 podman[245112]: 2025-12-02 17:02:09.267783479 +0000 UTC m=+0.085954800 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:02:09 compute-0 nova_compute[189459]: 2025-12-02 17:02:09.632 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:11 compute-0 podman[245132]: 2025-12-02 17:02:11.288043685 +0000 UTC m=+0.100012655 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  2 17:02:11 compute-0 podman[245131]: 2025-12-02 17:02:11.32523113 +0000 UTC m=+0.146703464 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, com.redhat.component=ubi9-container)
Dec  2 17:02:12 compute-0 nova_compute[189459]: 2025-12-02 17:02:12.886 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:14 compute-0 nova_compute[189459]: 2025-12-02 17:02:14.636 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:17 compute-0 podman[245168]: 2025-12-02 17:02:17.319903872 +0000 UTC m=+0.115968690 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:02:17 compute-0 podman[245167]: 2025-12-02 17:02:17.33589981 +0000 UTC m=+0.142269994 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:02:17 compute-0 podman[245166]: 2025-12-02 17:02:17.363135858 +0000 UTC m=+0.174659989 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:02:17 compute-0 nova_compute[189459]: 2025-12-02 17:02:17.890 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:19 compute-0 nova_compute[189459]: 2025-12-02 17:02:19.475 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:02:19 compute-0 nova_compute[189459]: 2025-12-02 17:02:19.503 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:02:19 compute-0 nova_compute[189459]: 2025-12-02 17:02:19.503 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:02:19 compute-0 nova_compute[189459]: 2025-12-02 17:02:19.641 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:21 compute-0 nova_compute[189459]: 2025-12-02 17:02:21.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:02:21 compute-0 nova_compute[189459]: 2025-12-02 17:02:21.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:02:22 compute-0 nova_compute[189459]: 2025-12-02 17:02:22.893 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.442 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.442 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.443 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.443 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.541 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.619 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.620 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.706 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.708 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.792 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.795 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.873 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.881 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.940 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.941 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:23 compute-0 nova_compute[189459]: 2025-12-02 17:02:23.999 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.000 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.056 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.059 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.116 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.125 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.186 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.187 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.280 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.283 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.377 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.379 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.476 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:02:24 compute-0 nova_compute[189459]: 2025-12-02 17:02:24.644 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.068 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.069 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4781MB free_disk=72.15888977050781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.070 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.070 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.165 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.166 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.166 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.166 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.166 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.236 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.251 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.276 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 17:02:25 compute-0 nova_compute[189459]: 2025-12-02 17:02:25.276 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:02:26 compute-0 nova_compute[189459]: 2025-12-02 17:02:26.275 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:02:26 compute-0 nova_compute[189459]: 2025-12-02 17:02:26.276 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 17:02:26 compute-0 nova_compute[189459]: 2025-12-02 17:02:26.975 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 17:02:26 compute-0 nova_compute[189459]: 2025-12-02 17:02:26.976 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:02:26 compute-0 nova_compute[189459]: 2025-12-02 17:02:26.976 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  2 17:02:27 compute-0 nova_compute[189459]: 2025-12-02 17:02:27.897 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.310 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.369 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.370 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.371 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.372 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.372 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:02:28 compute-0 nova_compute[189459]: 2025-12-02 17:02:28.372 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 17:02:29 compute-0 nova_compute[189459]: 2025-12-02 17:02:29.647 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:02:29 compute-0 podman[203941]: time="2025-12-02T17:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:02:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:02:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  2 17:02:30 compute-0 podman[245277]: 2025-12-02 17:02:30.263267733 +0000 UTC m=+0.078638143 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: ERROR   17:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: ERROR   17:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: ERROR   17:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: ERROR   17:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: ERROR   17:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:02:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:02:32 compute-0 nova_compute[189459]: 2025-12-02 17:02:32.901 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:02:34 compute-0 nova_compute[189459]: 2025-12-02 17:02:34.649 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:02:37 compute-0 nova_compute[189459]: 2025-12-02 17:02:37.905 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:02:38 compute-0 podman[245298]: 2025-12-02 17:02:38.083172601 +0000 UTC m=+0.130382567 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  2 17:02:38 compute-0 podman[245299]: 2025-12-02 17:02:38.093507418 +0000 UTC m=+0.142272945 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:02:39 compute-0 nova_compute[189459]: 2025-12-02 17:02:39.652 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:40 compute-0 podman[245334]: 2025-12-02 17:02:40.266216908 +0000 UTC m=+0.089942736 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec  2 17:02:42 compute-0 podman[245355]: 2025-12-02 17:02:42.261915188 +0000 UTC m=+0.074064501 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:02:42 compute-0 podman[245354]: 2025-12-02 17:02:42.300673183 +0000 UTC m=+0.121175790 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, vcs-type=git, version=9.4, distribution-scope=public, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Dec  2 17:02:42 compute-0 nova_compute[189459]: 2025-12-02 17:02:42.907 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:44 compute-0 nova_compute[189459]: 2025-12-02 17:02:44.655 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:47 compute-0 nova_compute[189459]: 2025-12-02 17:02:47.911 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:48 compute-0 podman[245394]: 2025-12-02 17:02:48.274870017 +0000 UTC m=+0.088343643 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:02:48 compute-0 podman[245393]: 2025-12-02 17:02:48.286084207 +0000 UTC m=+0.104029892 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:02:48 compute-0 podman[245392]: 2025-12-02 17:02:48.319057169 +0000 UTC m=+0.131048685 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  2 17:02:49 compute-0 nova_compute[189459]: 2025-12-02 17:02:49.658 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:52 compute-0 nova_compute[189459]: 2025-12-02 17:02:52.914 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:54 compute-0 nova_compute[189459]: 2025-12-02 17:02:54.660 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:57 compute-0 nova_compute[189459]: 2025-12-02 17:02:57.917 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:59 compute-0 nova_compute[189459]: 2025-12-02 17:02:59.663 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:02:59 compute-0 podman[203941]: time="2025-12-02T17:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:02:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:02:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  2 17:03:01 compute-0 podman[245464]: 2025-12-02 17:03:01.298199781 +0000 UTC m=+0.116553977 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: ERROR   17:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: ERROR   17:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: ERROR   17:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: ERROR   17:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: ERROR   17:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:03:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:03:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:01.871 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:01.871 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:01.872 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:02 compute-0 nova_compute[189459]: 2025-12-02 17:03:02.920 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.051 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.051 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.060 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce', 'name': 'vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.069 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '941718a9-628f-4f41-81e3-225760dc6a62', 'name': 'vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.069 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.069 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:03:03.069812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.077 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.085 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.091 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.093 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.093 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.093 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.094 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:03:03.092832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:03:03.095208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.138 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.139 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.139 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.175 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.176 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.176 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.208 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.208 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.209 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.209 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:03:03.210101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.242 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 42260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.278 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/cpu volume: 38240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.305 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/cpu volume: 36090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.306 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:03:03.306732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.401 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.402 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.403 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.481 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.482 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.482 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.555 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.556 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.556 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.557 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.558 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.558 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:03:03.557552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.558 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 412604943 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.559 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 86706146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.559 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.latency volume: 66308231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.559 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 717183131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.559 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 81550079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.559 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 63467364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.560 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.561 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.561 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.561 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:03:03.560914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.562 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.562 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.562 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.562 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.562 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.563 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.564 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.564 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.564 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.565 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.565 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:03:03.564292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.565 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.565 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.566 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.566 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.566 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.567 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.567 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.568 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.568 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.568 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.568 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.569 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:03:03.567302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.569 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.569 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.570 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.571 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:03:03.570603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.571 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.571 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.571 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.572 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.572 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.572 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.573 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.573 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:03:03.573589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.574 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.574 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.574 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 1373521669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.575 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 12454002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.575 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.575 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 709154876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.575 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 8231189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.575 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.576 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.577 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.577 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.577 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:03:03.577007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.578 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.579 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.579 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.579 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.580 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.580 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.580 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.580 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.580 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.581 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.582 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.582 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:03:03.578748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:03:03.581848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:03:03.583520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:03:03.584439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.584 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:03:03.585742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.586 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:03:03.586723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.588 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.588 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:03:03.588005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.588 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.589 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:03:03.589459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.591 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.591 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:03:03.590908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.591 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.592 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/memory.usage volume: 49.015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.594 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:03:03.592289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.594 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.594 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:03:03.593899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.595 14 DEBUG ceilometer.compute.pollsters [-] c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.596 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:03:03.595325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:03:03.598 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:03:04 compute-0 nova_compute[189459]: 2025-12-02 17:03:04.666 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:07 compute-0 nova_compute[189459]: 2025-12-02 17:03:07.928 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:08 compute-0 podman[245486]: 2025-12-02 17:03:08.279283252 +0000 UTC m=+0.096299586 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 17:03:08 compute-0 podman[245485]: 2025-12-02 17:03:08.286959557 +0000 UTC m=+0.097091587 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:03:09 compute-0 nova_compute[189459]: 2025-12-02 17:03:09.669 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:11 compute-0 podman[245522]: 2025-12-02 17:03:11.233465186 +0000 UTC m=+0.057235581 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 17:03:12 compute-0 nova_compute[189459]: 2025-12-02 17:03:12.927 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:13 compute-0 podman[245543]: 2025-12-02 17:03:13.281084883 +0000 UTC m=+0.086473073 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  2 17:03:13 compute-0 podman[245542]: 2025-12-02 17:03:13.281879654 +0000 UTC m=+0.105332737 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec  2 17:03:14 compute-0 nova_compute[189459]: 2025-12-02 17:03:14.671 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:17 compute-0 nova_compute[189459]: 2025-12-02 17:03:17.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:17 compute-0 nova_compute[189459]: 2025-12-02 17:03:17.930 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:19 compute-0 podman[245581]: 2025-12-02 17:03:19.256583206 +0000 UTC m=+0.067436554 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:03:19 compute-0 podman[245580]: 2025-12-02 17:03:19.276790297 +0000 UTC m=+0.092067043 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:03:19 compute-0 podman[245579]: 2025-12-02 17:03:19.320555167 +0000 UTC m=+0.130376057 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:03:19 compute-0 nova_compute[189459]: 2025-12-02 17:03:19.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:19 compute-0 nova_compute[189459]: 2025-12-02 17:03:19.675 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:22 compute-0 nova_compute[189459]: 2025-12-02 17:03:22.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:22 compute-0 nova_compute[189459]: 2025-12-02 17:03:22.933 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:23 compute-0 nova_compute[189459]: 2025-12-02 17:03:23.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.610 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.611 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.612 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.612 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:03:24 compute-0 nova_compute[189459]: 2025-12-02 17:03:24.679 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.788 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.807 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.808 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.808 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.809 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.840 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.841 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.841 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.842 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:03:25 compute-0 nova_compute[189459]: 2025-12-02 17:03:25.979 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.077 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.079 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.165 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.168 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.263 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.264 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.338 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.345 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.400 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.401 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.460 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.461 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.538 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.539 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.629 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.642 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.728 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.730 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.792 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.794 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.856 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.858 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:03:26 compute-0 nova_compute[189459]: 2025-12-02 17:03:26.969 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.286 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.287 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.288 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.288 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.289 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.290 189463 INFO nova.compute.manager [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Terminating instance#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.291 189463 DEBUG nova.compute.manager [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:03:27 compute-0 kernel: tap2b3cee36-c2 (unregistering): left promiscuous mode
Dec  2 17:03:27 compute-0 NetworkManager[56503]: <info>  [1764695007.3351] device (tap2b3cee36-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:03:27 compute-0 ovn_controller[97975]: 2025-12-02T17:03:27Z|00054|binding|INFO|Releasing lport 2b3cee36-c20f-440c-8026-d43bec6b580a from this chassis (sb_readonly=0)
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.346 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 ovn_controller[97975]: 2025-12-02T17:03:27Z|00055|binding|INFO|Setting lport 2b3cee36-c20f-440c-8026-d43bec6b580a down in Southbound
Dec  2 17:03:27 compute-0 ovn_controller[97975]: 2025-12-02T17:03:27Z|00056|binding|INFO|Removing iface tap2b3cee36-c2 ovn-installed in OVS
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.351 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.356 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:65:a3 192.168.0.244'], port_security=['fa:16:3e:1b:65:a3 192.168.0.244'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-lawun5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-port-etjiathcc44u', 'neutron:cidrs': '192.168.0.244/24', 'neutron:device_id': 'c3d793a6-79d5-4b91-ac80-9ac02a5d36ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-lawun5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-port-etjiathcc44u', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=2b3cee36-c20f-440c-8026-d43bec6b580a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.357 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 2b3cee36-c20f-440c-8026-d43bec6b580a in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d unbound from our chassis#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.359 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.363 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.375 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4da88c-3ec8-4ac1-9349-6e7bbd88e6b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:03:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  2 17:03:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 33.319s CPU time.
Dec  2 17:03:27 compute-0 systemd-machined[155878]: Machine qemu-3-instance-00000003 terminated.
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.406 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[948ff7fd-af73-4bcb-bffd-62546c7a536c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.409 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[fc09d49d-aab8-4332-8dad-0cd96d596f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.436 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[e94ca28a-d868-4730-b8e5-274f6e658fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.456 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[1d167ef5-2a3b-4232-a3a3-03e8bb30822e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 44219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245699, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.471 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea6a509-f661-420f-a63b-3f8bd66ef7b6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377200, 'tstamp': 377200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245700, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377202, 'tstamp': 377202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245700, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.474 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.475 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.477 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.478 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4780MB free_disk=72.15892791748047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.478 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.478 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.481 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.481 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.482 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.482 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:03:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:27.482 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.520 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.573 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.574 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.574 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.574 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.574 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.580 189463 INFO nova.virt.libvirt.driver [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance destroyed successfully.#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.580 189463 DEBUG nova.objects.instance [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'resources' on Instance uuid c3d793a6-79d5-4b91-ac80-9ac02a5d36ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.598 189463 DEBUG nova.virt.libvirt.vif [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T16:55:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-xglfaxo5mefa-wlt7peozsxvn-vnf-rucv727xl4dm',id=3,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T16:55:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-2wdfa0ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T16:55:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTU5MzU2ODA5NTcwNTkyMTA4OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  2 17:03:27 compute-0 nova_compute[189459]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTU5M
zU2ODA5NTcwNTkyMTA4OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTE1OTM1NjgwOTU3MDU5MjEwODk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xNTkzNTY4MDk1NzA1OTIxMDg5PT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=c3d793a6-79d5-4b91-ac80-9ac02a5d36ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.599 189463 DEBUG nova.network.os_vif_util [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "2b3cee36-c20f-440c-8026-d43bec6b580a", "address": "fa:16:3e:1b:65:a3", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cee36-c2", "ovs_interfaceid": "2b3cee36-c20f-440c-8026-d43bec6b580a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.600 189463 DEBUG nova.network.os_vif_util [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.600 189463 DEBUG os_vif [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.602 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.603 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b3cee36-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.604 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.606 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.610 189463 INFO os_vif [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:65:a3,bridge_name='br-int',has_traffic_filtering=True,id=2b3cee36-c20f-440c-8026-d43bec6b580a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2b3cee36-c2')#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.611 189463 INFO nova.virt.libvirt.driver [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Deleting instance files /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce_del#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.611 189463 INFO nova.virt.libvirt.driver [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Deletion of /var/lib/nova/instances/c3d793a6-79d5-4b91-ac80-9ac02a5d36ce_del complete#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.666 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.701 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.714 189463 INFO nova.compute.manager [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.715 189463 DEBUG oslo.service.loopingcall [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.715 189463 DEBUG nova.compute.manager [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.716 189463 DEBUG nova.network.neutron [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.724 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.725 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:27 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 17:03:27.598 189463 DEBUG nova.virt.libvirt.vif [None req-737bd31b-d2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.910 189463 DEBUG nova.compute.manager [req-e562361c-c679-4192-8778-ffe15b55ba39 req-c57e09bb-61f3-4016-a545-b97ad9016220 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-vif-unplugged-2b3cee36-c20f-440c-8026-d43bec6b580a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.912 189463 DEBUG oslo_concurrency.lockutils [req-e562361c-c679-4192-8778-ffe15b55ba39 req-c57e09bb-61f3-4016-a545-b97ad9016220 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.913 189463 DEBUG oslo_concurrency.lockutils [req-e562361c-c679-4192-8778-ffe15b55ba39 req-c57e09bb-61f3-4016-a545-b97ad9016220 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.914 189463 DEBUG oslo_concurrency.lockutils [req-e562361c-c679-4192-8778-ffe15b55ba39 req-c57e09bb-61f3-4016-a545-b97ad9016220 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.914 189463 DEBUG nova.compute.manager [req-e562361c-c679-4192-8778-ffe15b55ba39 req-c57e09bb-61f3-4016-a545-b97ad9016220 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] No waiting events found dispatching network-vif-unplugged-2b3cee36-c20f-440c-8026-d43bec6b580a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.915 189463 DEBUG nova.compute.manager [req-e562361c-c679-4192-8778-ffe15b55ba39 req-c57e09bb-61f3-4016-a545-b97ad9016220 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-vif-unplugged-2b3cee36-c20f-440c-8026-d43bec6b580a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:03:27 compute-0 nova_compute[189459]: 2025-12-02 17:03:27.936 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:28.010 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:03:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:28.011 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.017 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.326 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.742 189463 DEBUG nova.network.neutron [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.762 189463 INFO nova.compute.manager [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Took 1.05 seconds to deallocate network for instance.#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.807 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.807 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.912 189463 DEBUG nova.compute.provider_tree [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.932 189463 DEBUG nova.scheduler.client.report [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.959 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:28 compute-0 nova_compute[189459]: 2025-12-02 17:03:28.990 189463 INFO nova.scheduler.client.report [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Deleted allocations for instance c3d793a6-79d5-4b91-ac80-9ac02a5d36ce#033[00m
Dec  2 17:03:29 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.046 189463 DEBUG oslo_concurrency.lockutils [None req-737bd31b-d25a-4b22-b203-cfbd84ad0332 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:29 compute-0 podman[203941]: time="2025-12-02T17:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:03:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:03:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Dec  2 17:03:29 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.995 189463 DEBUG nova.compute.manager [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:03:29 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.995 189463 DEBUG oslo_concurrency.lockutils [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:03:29 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.997 189463 DEBUG oslo_concurrency.lockutils [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:03:29 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.997 189463 DEBUG oslo_concurrency.lockutils [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c3d793a6-79d5-4b91-ac80-9ac02a5d36ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.998 189463 DEBUG nova.compute.manager [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] No waiting events found dispatching network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.998 189463 WARNING nova.compute.manager [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received unexpected event network-vif-plugged-2b3cee36-c20f-440c-8026-d43bec6b580a for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:29.999 189463 DEBUG nova.compute.manager [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Received event network-changed-2b3cee36-c20f-440c-8026-d43bec6b580a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:30.000 189463 DEBUG nova.compute.manager [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Refreshing instance network info cache due to event network-changed-2b3cee36-c20f-440c-8026-d43bec6b580a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:30.000 189463 DEBUG oslo_concurrency.lockutils [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:30.001 189463 DEBUG oslo_concurrency.lockutils [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:30.001 189463 DEBUG nova.network.neutron [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Refreshing network info cache for port 2b3cee36-c20f-440c-8026-d43bec6b580a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:03:30 compute-0 nova_compute[189459]: 2025-12-02 17:03:30.186 189463 DEBUG nova.network.neutron [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:03:31 compute-0 nova_compute[189459]: 2025-12-02 17:03:31.222 189463 DEBUG nova.network.neutron [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec  2 17:03:31 compute-0 nova_compute[189459]: 2025-12-02 17:03:31.223 189463 DEBUG oslo_concurrency.lockutils [req-4b052a97-1256-4a0a-bb4f-67403ccf13f2 req-902adb74-8f03-4a2c-9a07-7bfb199dbf47 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-c3d793a6-79d5-4b91-ac80-9ac02a5d36ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: ERROR   17:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: ERROR   17:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: ERROR   17:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: ERROR   17:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: ERROR   17:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:03:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:03:32 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:03:32.016 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:03:32 compute-0 podman[245722]: 2025-12-02 17:03:32.302201945 +0000 UTC m=+0.113449124 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, 
architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Dec  2 17:03:32 compute-0 nova_compute[189459]: 2025-12-02 17:03:32.606 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:32 compute-0 nova_compute[189459]: 2025-12-02 17:03:32.940 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:37 compute-0 nova_compute[189459]: 2025-12-02 17:03:37.608 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:37 compute-0 nova_compute[189459]: 2025-12-02 17:03:37.944 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:39 compute-0 podman[245744]: 2025-12-02 17:03:39.272674867 +0000 UTC m=+0.088865217 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Dec  2 17:03:39 compute-0 podman[245745]: 2025-12-02 17:03:39.287036391 +0000 UTC m=+0.096924272 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Dec  2 17:03:42 compute-0 podman[245781]: 2025-12-02 17:03:42.298946909 +0000 UTC m=+0.115055547 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 17:03:42 compute-0 nova_compute[189459]: 2025-12-02 17:03:42.572 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695007.5704901, c3d793a6-79d5-4b91-ac80-9ac02a5d36ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:03:42 compute-0 nova_compute[189459]: 2025-12-02 17:03:42.572 189463 INFO nova.compute.manager [-] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:03:42 compute-0 nova_compute[189459]: 2025-12-02 17:03:42.599 189463 DEBUG nova.compute.manager [None req-e9c07462-3dd1-4f28-aeb9-50822b604bd8 - - - - - -] [instance: c3d793a6-79d5-4b91-ac80-9ac02a5d36ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:03:42 compute-0 nova_compute[189459]: 2025-12-02 17:03:42.610 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:42 compute-0 nova_compute[189459]: 2025-12-02 17:03:42.947 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:44 compute-0 podman[245803]: 2025-12-02 17:03:44.244841035 +0000 UTC m=+0.072498139 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm)
Dec  2 17:03:44 compute-0 podman[245804]: 2025-12-02 17:03:44.284183997 +0000 UTC m=+0.101091264 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:03:47 compute-0 nova_compute[189459]: 2025-12-02 17:03:47.614 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:47 compute-0 nova_compute[189459]: 2025-12-02 17:03:47.948 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:50 compute-0 podman[245843]: 2025-12-02 17:03:50.24646378 +0000 UTC m=+0.071142643 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:03:50 compute-0 podman[245844]: 2025-12-02 17:03:50.306204556 +0000 UTC m=+0.113618788 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:03:50 compute-0 podman[245842]: 2025-12-02 17:03:50.336544087 +0000 UTC m=+0.151296565 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  2 17:03:52 compute-0 nova_compute[189459]: 2025-12-02 17:03:52.618 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:52 compute-0 nova_compute[189459]: 2025-12-02 17:03:52.950 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:53 compute-0 systemd-logind[790]: New session 29 of user zuul.
Dec  2 17:03:53 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec  2 17:03:54 compute-0 python3[246090]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 17:03:57 compute-0 nova_compute[189459]: 2025-12-02 17:03:57.621 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:57 compute-0 nova_compute[189459]: 2025-12-02 17:03:57.952 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:03:59 compute-0 podman[203941]: time="2025-12-02T17:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:03:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:03:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4773 "" "Go-http-client/1.1"
Dec  2 17:04:00 compute-0 ovn_controller[97975]: 2025-12-02T17:04:00Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: ERROR   17:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: ERROR   17:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: ERROR   17:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: ERROR   17:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: ERROR   17:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:04:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:04:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:04:01.872 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:04:01.874 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:04:01.875 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:02 compute-0 nova_compute[189459]: 2025-12-02 17:04:02.624 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:02 compute-0 nova_compute[189459]: 2025-12-02 17:04:02.954 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:03 compute-0 podman[246128]: 2025-12-02 17:04:03.298524713 +0000 UTC m=+0.117317848 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 17:04:07 compute-0 nova_compute[189459]: 2025-12-02 17:04:07.628 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:07 compute-0 nova_compute[189459]: 2025-12-02 17:04:07.957 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:10 compute-0 podman[246149]: 2025-12-02 17:04:10.346739048 +0000 UTC m=+0.156041473 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 17:04:10 compute-0 podman[246148]: 2025-12-02 17:04:10.345276849 +0000 UTC m=+0.160403959 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.751 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "9aa8ce34-9c18-485e-b2f5-527af73d8462" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.752 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "9aa8ce34-9c18-485e-b2f5-527af73d8462" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.764 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.818 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.819 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.828 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.828 189463 INFO nova.compute.claims [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.969 189463 DEBUG nova.compute.provider_tree [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:04:10 compute-0 nova_compute[189459]: 2025-12-02 17:04:10.994 189463 DEBUG nova.scheduler.client.report [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.019 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.020 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.077 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.099 189463 INFO nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.145 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.234 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.235 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.235 189463 INFO nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Creating image(s)#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.236 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.236 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.237 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.237 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "a2d15f7c2922ae6c8da2b52b57bb19145907dde6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:11 compute-0 nova_compute[189459]: 2025-12-02 17:04:11.237 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "a2d15f7c2922ae6c8da2b52b57bb19145907dde6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.367 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.452 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.part --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.454 189463 DEBUG nova.virt.images [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] 56b677a1-b677-49ef-8ccd-c107e87e1c29 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.455 189463 DEBUG nova.privsep.utils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.456 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.part /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.633 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.657 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.part /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.663 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.724 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.726 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "a2d15f7c2922ae6c8da2b52b57bb19145907dde6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.749 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.813 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.815 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "a2d15f7c2922ae6c8da2b52b57bb19145907dde6" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.815 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "a2d15f7c2922ae6c8da2b52b57bb19145907dde6" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.827 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.885 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.886 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6,backing_fmt=raw /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.932 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6,backing_fmt=raw /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.933 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "a2d15f7c2922ae6c8da2b52b57bb19145907dde6" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.933 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:12 compute-0 nova_compute[189459]: 2025-12-02 17:04:12.960 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.013 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.014 189463 DEBUG nova.virt.disk.api [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Checking if we can resize image /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.015 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.074 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.076 189463 DEBUG nova.virt.disk.api [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Cannot resize image /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.077 189463 DEBUG nova.objects.instance [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'migration_context' on Instance uuid 9aa8ce34-9c18-485e-b2f5-527af73d8462 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.100 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.101 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.102 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.119 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.203 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.204 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.205 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.223 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:13 compute-0 podman[246212]: 2025-12-02 17:04:13.280698663 +0000 UTC m=+0.102494972 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.297 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.299 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.366 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0 1073741824" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.368 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.368 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.454 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.455 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.456 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Ensure instance console log exists: /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.456 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.457 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.458 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.460 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T17:03:59Z,direct_url=<?>,disk_format='qcow2',id=56b677a1-b677-49ef-8ccd-c107e87e1c29,min_disk=0,min_ram=0,name='fvt_testing_image',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T17:04:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '56b677a1-b677-49ef-8ccd-c107e87e1c29'}], 'ephemerals': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 1, 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.469 189463 WARNING nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.477 189463 DEBUG nova.virt.libvirt.host [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.477 189463 DEBUG nova.virt.libvirt.host [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.482 189463 DEBUG nova.virt.libvirt.host [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.483 189463 DEBUG nova.virt.libvirt.host [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.483 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.484 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:04:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9503eaf3-89e6-4736-8db7-93e9ea10323c',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-02T17:03:59Z,direct_url=<?>,disk_format='qcow2',id=56b677a1-b677-49ef-8ccd-c107e87e1c29,min_disk=0,min_ram=0,name='fvt_testing_image',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-02T17:04:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.484 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.485 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.485 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.485 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.486 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.486 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.486 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.487 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.487 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.487 189463 DEBUG nova.virt.hardware [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.493 189463 DEBUG nova.objects.instance [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9aa8ce34-9c18-485e-b2f5-527af73d8462 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.508 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <uuid>9aa8ce34-9c18-485e-b2f5-527af73d8462</uuid>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <name>instance-00000005</name>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <memory>524288</memory>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:name>fvt_testing_server</nova:name>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:04:13</nova:creationTime>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:flavor name="fvt_testing_flavor">
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:memory>512</nova:memory>
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:ephemeral>1</nova:ephemeral>
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:user uuid="91c12bcb1ad14b95b1bdedf7527f1adf">admin</nova:user>
Dec  2 17:04:13 compute-0 nova_compute[189459]:        <nova:project uuid="2f96d47197fa40f2a7126bf626847d74">admin</nova:project>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="56b677a1-b677-49ef-8ccd-c107e87e1c29"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <nova:ports/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <system>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <entry name="serial">9aa8ce34-9c18-485e-b2f5-527af73d8462</entry>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <entry name="uuid">9aa8ce34-9c18-485e-b2f5-527af73d8462</entry>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </system>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <os>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </os>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <features>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </features>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <target dev="vdb" bus="virtio"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.config"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/console.log" append="off"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <video>
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </video>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:04:13 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:04:13 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:04:13 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:04:13 compute-0 nova_compute[189459]: </domain>
Dec  2 17:04:13 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.600 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.601 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.602 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:04:13 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.603 189463 INFO nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Using config drive#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:13.999 189463 INFO nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Creating config drive at /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.config#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.006 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1nwuvye execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.135 189463 DEBUG oslo_concurrency.processutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1nwuvye" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:14 compute-0 systemd-machined[155878]: New machine qemu-5-instance-00000005.
Dec  2 17:04:14 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  2 17:04:14 compute-0 podman[246258]: 2025-12-02 17:04:14.390741042 +0000 UTC m=+0.095825983 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public)
Dec  2 17:04:14 compute-0 podman[246264]: 2025-12-02 17:04:14.397661397 +0000 UTC m=+0.062784570 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.650 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695054.6496778, 9aa8ce34-9c18-485e-b2f5-527af73d8462 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.654 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.658 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.659 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.666 189463 INFO nova.virt.libvirt.driver [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Instance spawned successfully.#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.667 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.681 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.697 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.705 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.705 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.706 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.706 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.707 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.707 189463 DEBUG nova.virt.libvirt.driver [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.747 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.747 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695054.652753, 9aa8ce34-9c18-485e-b2f5-527af73d8462 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.748 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] VM Started (Lifecycle Event)#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.791 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.799 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.818 189463 INFO nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Took 3.58 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.819 189463 DEBUG nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.834 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.885 189463 INFO nova.compute.manager [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Took 4.09 seconds to build instance.#033[00m
Dec  2 17:04:14 compute-0 nova_compute[189459]: 2025-12-02 17:04:14.903 189463 DEBUG oslo_concurrency.lockutils [None req-11dc668e-7212-49dd-855a-91ecb20e024e 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "9aa8ce34-9c18-485e-b2f5-527af73d8462" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:16 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  2 17:04:16 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  2 17:04:17 compute-0 nova_compute[189459]: 2025-12-02 17:04:17.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:17 compute-0 nova_compute[189459]: 2025-12-02 17:04:17.639 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:17 compute-0 nova_compute[189459]: 2025-12-02 17:04:17.964 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:21 compute-0 podman[246329]: 2025-12-02 17:04:21.276983785 +0000 UTC m=+0.078128810 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:04:21 compute-0 podman[246328]: 2025-12-02 17:04:21.282086542 +0000 UTC m=+0.090034238 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:04:21 compute-0 podman[246327]: 2025-12-02 17:04:21.363964731 +0000 UTC m=+0.181123434 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:04:21 compute-0 nova_compute[189459]: 2025-12-02 17:04:21.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:22 compute-0 nova_compute[189459]: 2025-12-02 17:04:22.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:22 compute-0 nova_compute[189459]: 2025-12-02 17:04:22.642 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:22 compute-0 nova_compute[189459]: 2025-12-02 17:04:22.967 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:23 compute-0 nova_compute[189459]: 2025-12-02 17:04:23.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:24 compute-0 nova_compute[189459]: 2025-12-02 17:04:24.429 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:25 compute-0 nova_compute[189459]: 2025-12-02 17:04:25.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:25 compute-0 nova_compute[189459]: 2025-12-02 17:04:25.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:04:26 compute-0 nova_compute[189459]: 2025-12-02 17:04:26.014 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:04:26 compute-0 nova_compute[189459]: 2025-12-02 17:04:26.015 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:04:26 compute-0 nova_compute[189459]: 2025-12-02 17:04:26.015 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.387 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.405 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.406 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.433 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.434 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.434 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.435 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.539 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.636 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.637 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.656 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.711 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.713 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.780 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.781 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.851 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.862 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.953 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.955 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:27 compute-0 nova_compute[189459]: 2025-12-02 17:04:27.979 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.026 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.027 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.116 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.118 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.198 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.211 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.291 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.293 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.366 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.368 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.434 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.440 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.537 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.927 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.929 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4790MB free_disk=72.15279006958008GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.930 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:28 compute-0 nova_compute[189459]: 2025-12-02 17:04:28.931 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.115 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.116 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.116 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 9aa8ce34-9c18-485e-b2f5-527af73d8462 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.117 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.118 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.468 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.487 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "9aa8ce34-9c18-485e-b2f5-527af73d8462" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.488 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "9aa8ce34-9c18-485e-b2f5-527af73d8462" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.489 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "9aa8ce34-9c18-485e-b2f5-527af73d8462-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.489 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "9aa8ce34-9c18-485e-b2f5-527af73d8462-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.490 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "9aa8ce34-9c18-485e-b2f5-527af73d8462-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.491 189463 INFO nova.compute.manager [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Terminating instance#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.493 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "refresh_cache-9aa8ce34-9c18-485e-b2f5-527af73d8462" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.493 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquired lock "refresh_cache-9aa8ce34-9c18-485e-b2f5-527af73d8462" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.494 189463 DEBUG nova.network.neutron [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.496 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.519 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:04:29 compute-0 nova_compute[189459]: 2025-12-02 17:04:29.520 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:29 compute-0 podman[203941]: time="2025-12-02T17:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:04:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:04:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.001 189463 DEBUG nova.network.neutron [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.292 189463 DEBUG nova.network.neutron [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.309 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Releasing lock "refresh_cache-9aa8ce34-9c18-485e-b2f5-527af73d8462" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.310 189463 DEBUG nova.compute.manager [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:04:30 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  2 17:04:30 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.170s CPU time.
Dec  2 17:04:30 compute-0 systemd-machined[155878]: Machine qemu-5-instance-00000005 terminated.
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.619 189463 INFO nova.virt.libvirt.driver [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Instance destroyed successfully.#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.620 189463 DEBUG nova.objects.instance [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'resources' on Instance uuid 9aa8ce34-9c18-485e-b2f5-527af73d8462 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.635 189463 INFO nova.virt.libvirt.driver [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Deleting instance files /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462_del#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.637 189463 INFO nova.virt.libvirt.driver [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Deletion of /var/lib/nova/instances/9aa8ce34-9c18-485e-b2f5-527af73d8462_del complete#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.691 189463 INFO nova.compute.manager [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.692 189463 DEBUG oslo.service.loopingcall [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.693 189463 DEBUG nova.compute.manager [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:04:30 compute-0 nova_compute[189459]: 2025-12-02 17:04:30.693 189463 DEBUG nova.network.neutron [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.002 189463 DEBUG nova.network.neutron [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.024 189463 DEBUG nova.network.neutron [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.048 189463 INFO nova.compute.manager [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Took 0.35 seconds to deallocate network for instance.#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.110 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.110 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.223 189463 DEBUG nova.compute.provider_tree [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.240 189463 DEBUG nova.scheduler.client.report [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.260 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.291 189463 INFO nova.scheduler.client.report [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Deleted allocations for instance 9aa8ce34-9c18-485e-b2f5-527af73d8462#033[00m
Dec  2 17:04:31 compute-0 nova_compute[189459]: 2025-12-02 17:04:31.363 189463 DEBUG oslo_concurrency.lockutils [None req-159a791d-7a27-44be-911b-4b46e203fe6a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "9aa8ce34-9c18-485e-b2f5-527af73d8462" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:04:31 compute-0 openstack_network_exporter[206093]: ERROR   17:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:04:31 compute-0 openstack_network_exporter[206093]: ERROR   17:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:04:31 compute-0 openstack_network_exporter[206093]: ERROR   17:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:04:31 compute-0 openstack_network_exporter[206093]: ERROR   17:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:04:31 compute-0 openstack_network_exporter[206093]: ERROR   17:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:04:32 compute-0 nova_compute[189459]: 2025-12-02 17:04:32.526 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:04:32 compute-0 nova_compute[189459]: 2025-12-02 17:04:32.527 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:04:32 compute-0 nova_compute[189459]: 2025-12-02 17:04:32.659 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:32 compute-0 nova_compute[189459]: 2025-12-02 17:04:32.973 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:34 compute-0 podman[246444]: 2025-12-02 17:04:34.295525584 +0000 UTC m=+0.121682774 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41)
Dec  2 17:04:37 compute-0 nova_compute[189459]: 2025-12-02 17:04:37.662 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:37 compute-0 nova_compute[189459]: 2025-12-02 17:04:37.977 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:04:41 compute-0 podman[246466]: 2025-12-02 17:04:41.284746993 +0000 UTC m=+0.101976767 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:04:41 compute-0 podman[246465]: 2025-12-02 17:04:41.304692507 +0000 UTC m=+0.118405107 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec  2 17:04:42 compute-0 nova_compute[189459]: 2025-12-02 17:04:42.666 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:42 compute-0 nova_compute[189459]: 2025-12-02 17:04:42.979 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:44 compute-0 podman[246502]: 2025-12-02 17:04:44.312274513 +0000 UTC m=+0.129338649 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:04:44 compute-0 podman[246522]: 2025-12-02 17:04:44.781674294 +0000 UTC m=+0.086155925 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-type=git, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, release-0.7.12=, config_id=edpm, container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:04:44 compute-0 podman[246523]: 2025-12-02 17:04:44.803473237 +0000 UTC m=+0.109139289 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:04:45 compute-0 nova_compute[189459]: 2025-12-02 17:04:45.614 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695070.6128948, 9aa8ce34-9c18-485e-b2f5-527af73d8462 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:04:45 compute-0 nova_compute[189459]: 2025-12-02 17:04:45.615 189463 INFO nova.compute.manager [-] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] VM Stopped (Lifecycle Event)
Dec  2 17:04:45 compute-0 nova_compute[189459]: 2025-12-02 17:04:45.637 189463 DEBUG nova.compute.manager [None req-884a2404-59d7-4ef6-b805-64c347f38f59 - - - - - -] [instance: 9aa8ce34-9c18-485e-b2f5-527af73d8462] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:04:47 compute-0 nova_compute[189459]: 2025-12-02 17:04:47.671 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:47 compute-0 nova_compute[189459]: 2025-12-02 17:04:47.982 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:52 compute-0 podman[246563]: 2025-12-02 17:04:52.247632284 +0000 UTC m=+0.072163790 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:04:52 compute-0 podman[246562]: 2025-12-02 17:04:52.256508691 +0000 UTC m=+0.083883613 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:04:52 compute-0 podman[246561]: 2025-12-02 17:04:52.289682448 +0000 UTC m=+0.119543347 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:04:52 compute-0 nova_compute[189459]: 2025-12-02 17:04:52.675 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:52 compute-0 nova_compute[189459]: 2025-12-02 17:04:52.986 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:54 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec  2 17:04:54 compute-0 systemd[1]: session-29.scope: Consumed 1.236s CPU time.
Dec  2 17:04:54 compute-0 systemd-logind[790]: Session 29 logged out. Waiting for processes to exit.
Dec  2 17:04:54 compute-0 systemd-logind[790]: Removed session 29.
Dec  2 17:04:57 compute-0 nova_compute[189459]: 2025-12-02 17:04:57.678 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:57 compute-0 nova_compute[189459]: 2025-12-02 17:04:57.989 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:04:59 compute-0 podman[203941]: time="2025-12-02T17:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:04:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:04:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4773 "" "Go-http-client/1.1"
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: ERROR   17:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: ERROR   17:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: ERROR   17:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: ERROR   17:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: ERROR   17:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:05:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:05:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:05:01.874 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:05:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:05:01.875 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:05:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:05:01.876 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:05:02 compute-0 nova_compute[189459]: 2025-12-02 17:05:02.681 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:02 compute-0 nova_compute[189459]: 2025-12-02 17:05:02.992 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.051 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.051 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.061 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.066 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '941718a9-628f-4f41-81e3-225760dc6a62', 'name': 'vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:05:03.067188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.073 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.078 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.080 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.081 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:05:03.080682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:05:03.083619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.120 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.121 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.121 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.161 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.161 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.161 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.162 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:05:03.162848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.192 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 43960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.222 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/cpu volume: 37800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.223 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.223 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:05:03.223520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.330 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.331 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.332 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.456 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.456 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.457 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.458 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.459 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.459 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.459 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.460 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 717183131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.460 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 81550079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.460 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 63467364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.462 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.462 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.463 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.463 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.464 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:05:03.458826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.464 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:05:03.462460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.466 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.466 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.467 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.467 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.468 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.468 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:05:03.466444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.470 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.470 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.471 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.471 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.472 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.472 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:05:03.470144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.474 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.474 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:05:03.474119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.475 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.475 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.475 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.476 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.476 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.477 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.478 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.478 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:05:03.477986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.478 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.479 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.479 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 709154876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.479 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 8231189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.480 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.481 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.481 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.481 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.482 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.483 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:05:03.481515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.483 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.483 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:05:03.483678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.484 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.484 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.485 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.485 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.485 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.487 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.487 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.487 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:05:03.487231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.489 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:05:03.489807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.491 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.491 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.492 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:05:03.491508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:05:03.493209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:05:03.494449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.494 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.495 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.496 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.496 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:05:03.495891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:05:03.497311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.497 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.498 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.498 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.499 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.499 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.500 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.500 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.500 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.500 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/memory.usage volume: 48.89453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:05:03.498947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:05:03.500388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:05:03.502078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:05:03.503529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.504 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.505 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.505 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.505 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.505 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.505 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.505 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.506 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:05:03.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:05:05 compute-0 podman[246632]: 2025-12-02 17:05:05.247153398 +0000 UTC m=+0.082360263 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:05:07 compute-0 systemd-logind[790]: New session 30 of user zuul.
Dec  2 17:05:07 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec  2 17:05:07 compute-0 nova_compute[189459]: 2025-12-02 17:05:07.684 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:07 compute-0 nova_compute[189459]: 2025-12-02 17:05:07.995 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:08 compute-0 python3[246832]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 17:05:12 compute-0 podman[246871]: 2025-12-02 17:05:12.257495579 +0000 UTC m=+0.079586349 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  2 17:05:12 compute-0 podman[246870]: 2025-12-02 17:05:12.28293384 +0000 UTC m=+0.109844609 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 17:05:12 compute-0 nova_compute[189459]: 2025-12-02 17:05:12.687 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:12 compute-0 nova_compute[189459]: 2025-12-02 17:05:12.997 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:14 compute-0 podman[246908]: 2025-12-02 17:05:14.808041653 +0000 UTC m=+0.115172331 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  2 17:05:14 compute-0 podman[246927]: 2025-12-02 17:05:14.941145231 +0000 UTC m=+0.075432237 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  2 17:05:14 compute-0 podman[246926]: 2025-12-02 17:05:14.944320326 +0000 UTC m=+0.088738963 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:05:16 compute-0 python3[247138]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 17:05:17 compute-0 nova_compute[189459]: 2025-12-02 17:05:17.690 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:18 compute-0 nova_compute[189459]: 2025-12-02 17:05:18.000 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:19 compute-0 nova_compute[189459]: 2025-12-02 17:05:19.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:21 compute-0 nova_compute[189459]: 2025-12-02 17:05:21.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:21 compute-0 nova_compute[189459]: 2025-12-02 17:05:21.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:21 compute-0 nova_compute[189459]: 2025-12-02 17:05:21.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:05:21 compute-0 nova_compute[189459]: 2025-12-02 17:05:21.441 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:05:22 compute-0 nova_compute[189459]: 2025-12-02 17:05:22.440 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:22 compute-0 nova_compute[189459]: 2025-12-02 17:05:22.695 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:23 compute-0 nova_compute[189459]: 2025-12-02 17:05:23.003 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:23 compute-0 podman[247179]: 2025-12-02 17:05:23.285972976 +0000 UTC m=+0.086513124 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:05:23 compute-0 podman[247178]: 2025-12-02 17:05:23.295432779 +0000 UTC m=+0.106519119 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:05:23 compute-0 podman[247177]: 2025-12-02 17:05:23.316861502 +0000 UTC m=+0.131362303 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:05:24 compute-0 nova_compute[189459]: 2025-12-02 17:05:24.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:25 compute-0 nova_compute[189459]: 2025-12-02 17:05:25.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:25 compute-0 nova_compute[189459]: 2025-12-02 17:05:25.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:05:25 compute-0 nova_compute[189459]: 2025-12-02 17:05:25.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:05:26 compute-0 nova_compute[189459]: 2025-12-02 17:05:26.059 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:05:26 compute-0 nova_compute[189459]: 2025-12-02 17:05:26.059 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:05:26 compute-0 nova_compute[189459]: 2025-12-02 17:05:26.060 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:05:26 compute-0 nova_compute[189459]: 2025-12-02 17:05:26.060 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:05:26 compute-0 python3[247421]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.500 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.529 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.529 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.530 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.530 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.530 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.578 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.578 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.579 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.579 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.690 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.709 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.758 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.764 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.860 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.861 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.929 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:27 compute-0 nova_compute[189459]: 2025-12-02 17:05:27.931 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.006 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.026 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.036 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.123 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.127 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.195 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.196 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.295 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.296 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.376 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.782 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.784 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4935MB free_disk=72.15369033813477GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.784 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:05:28 compute-0 nova_compute[189459]: 2025-12-02 17:05:28.784 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.077 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.077 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.078 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.078 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.148 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.190 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.191 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.202 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.246 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.352 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.372 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.399 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.400 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:05:29 compute-0 nova_compute[189459]: 2025-12-02 17:05:29.401 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:05:29 compute-0 podman[203941]: time="2025-12-02T17:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:05:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:05:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec  2 17:05:31 compute-0 openstack_network_exporter[206093]: ERROR   17:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:05:31 compute-0 openstack_network_exporter[206093]: ERROR   17:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:05:31 compute-0 openstack_network_exporter[206093]: ERROR   17:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:05:31 compute-0 openstack_network_exporter[206093]: ERROR   17:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:05:31 compute-0 openstack_network_exporter[206093]: ERROR   17:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:05:32 compute-0 nova_compute[189459]: 2025-12-02 17:05:32.304 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:05:32 compute-0 nova_compute[189459]: 2025-12-02 17:05:32.305 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 17:05:32 compute-0 nova_compute[189459]: 2025-12-02 17:05:32.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:05:32 compute-0 nova_compute[189459]: 2025-12-02 17:05:32.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  2 17:05:32 compute-0 nova_compute[189459]: 2025-12-02 17:05:32.714 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:33 compute-0 nova_compute[189459]: 2025-12-02 17:05:33.008 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:36 compute-0 podman[247486]: 2025-12-02 17:05:36.324074687 +0000 UTC m=+0.142146102 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec  2 17:05:37 compute-0 nova_compute[189459]: 2025-12-02 17:05:37.719 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:38 compute-0 nova_compute[189459]: 2025-12-02 17:05:38.011 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:41 compute-0 python3[247682]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 17:05:42 compute-0 nova_compute[189459]: 2025-12-02 17:05:42.722 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:43 compute-0 nova_compute[189459]: 2025-12-02 17:05:43.014 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:43 compute-0 podman[247723]: 2025-12-02 17:05:43.310414437 +0000 UTC m=+0.125498757 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 17:05:43 compute-0 podman[247722]: 2025-12-02 17:05:43.327135494 +0000 UTC m=+0.149916420 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:05:45 compute-0 podman[247761]: 2025-12-02 17:05:45.267518094 +0000 UTC m=+0.082339953 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi)
Dec  2 17:05:45 compute-0 podman[247762]: 2025-12-02 17:05:45.295499522 +0000 UTC m=+0.101210917 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:05:45 compute-0 podman[247767]: 2025-12-02 17:05:45.29579923 +0000 UTC m=+0.082736263 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  2 17:05:47 compute-0 nova_compute[189459]: 2025-12-02 17:05:47.726 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:48 compute-0 nova_compute[189459]: 2025-12-02 17:05:48.016 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:52 compute-0 nova_compute[189459]: 2025-12-02 17:05:52.729 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:53 compute-0 nova_compute[189459]: 2025-12-02 17:05:53.017 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:54 compute-0 podman[247819]: 2025-12-02 17:05:54.251470863 +0000 UTC m=+0.080197615 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:05:54 compute-0 podman[247820]: 2025-12-02 17:05:54.279619356 +0000 UTC m=+0.094546259 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:05:54 compute-0 podman[247818]: 2025-12-02 17:05:54.37029567 +0000 UTC m=+0.192776865 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 17:05:57 compute-0 nova_compute[189459]: 2025-12-02 17:05:57.732 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:58 compute-0 nova_compute[189459]: 2025-12-02 17:05:58.021 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:05:59 compute-0 podman[203941]: time="2025-12-02T17:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:05:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:05:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec  2 17:06:01 compute-0 openstack_network_exporter[206093]: ERROR   17:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:06:01 compute-0 openstack_network_exporter[206093]: ERROR   17:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:06:01 compute-0 openstack_network_exporter[206093]: ERROR   17:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:06:01 compute-0 openstack_network_exporter[206093]: ERROR   17:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:06:01 compute-0 openstack_network_exporter[206093]: ERROR   17:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:06:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:06:01.875 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:06:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:06:01.876 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:06:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:06:01.877 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:06:02 compute-0 nova_compute[189459]: 2025-12-02 17:06:02.735 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:06:03 compute-0 nova_compute[189459]: 2025-12-02 17:06:03.023 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:06:07 compute-0 podman[247889]: 2025-12-02 17:06:07.285681716 +0000 UTC m=+0.103522229 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec  2 17:06:07 compute-0 nova_compute[189459]: 2025-12-02 17:06:07.738 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:08 compute-0 nova_compute[189459]: 2025-12-02 17:06:08.025 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:12 compute-0 nova_compute[189459]: 2025-12-02 17:06:12.740 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:13 compute-0 nova_compute[189459]: 2025-12-02 17:06:13.027 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:14 compute-0 podman[247908]: 2025-12-02 17:06:14.284242165 +0000 UTC m=+0.105006248 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:06:14 compute-0 podman[247909]: 2025-12-02 17:06:14.342562474 +0000 UTC m=+0.147075893 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:06:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 17:06:16 compute-0 podman[247949]: 2025-12-02 17:06:16.27310768 +0000 UTC m=+0.085041484 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your 
containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, container_name=kepler, distribution-scope=public, release=1214.1726694543)
Dec  2 17:06:16 compute-0 podman[247950]: 2025-12-02 17:06:16.277454277 +0000 UTC m=+0.087771478 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:06:16 compute-0 podman[247947]: 2025-12-02 17:06:16.301034907 +0000 UTC m=+0.118420427 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:06:17 compute-0 nova_compute[189459]: 2025-12-02 17:06:17.744 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:18 compute-0 nova_compute[189459]: 2025-12-02 17:06:18.030 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:20 compute-0 nova_compute[189459]: 2025-12-02 17:06:20.437 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:22 compute-0 nova_compute[189459]: 2025-12-02 17:06:22.747 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:23 compute-0 nova_compute[189459]: 2025-12-02 17:06:23.033 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:23 compute-0 nova_compute[189459]: 2025-12-02 17:06:23.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:23 compute-0 nova_compute[189459]: 2025-12-02 17:06:23.434 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:24 compute-0 nova_compute[189459]: 2025-12-02 17:06:24.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:25 compute-0 podman[248008]: 2025-12-02 17:06:25.272854571 +0000 UTC m=+0.093321196 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:06:25 compute-0 podman[248009]: 2025-12-02 17:06:25.301448006 +0000 UTC m=+0.106793687 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:06:25 compute-0 podman[248007]: 2025-12-02 17:06:25.372337151 +0000 UTC m=+0.188724037 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 17:06:25 compute-0 nova_compute[189459]: 2025-12-02 17:06:25.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:25 compute-0 nova_compute[189459]: 2025-12-02 17:06:25.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:25 compute-0 nova_compute[189459]: 2025-12-02 17:06:25.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:06:26 compute-0 nova_compute[189459]: 2025-12-02 17:06:26.094 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:06:26 compute-0 nova_compute[189459]: 2025-12-02 17:06:26.095 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:06:26 compute-0 nova_compute[189459]: 2025-12-02 17:06:26.095 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.470 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.498 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.499 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.499 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.525 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.526 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.526 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.527 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.620 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.710 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.712 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.750 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.777 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.779 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.838 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.840 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.903 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.913 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.981 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:27 compute-0 nova_compute[189459]: 2025-12-02 17:06:27.983 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.037 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.053 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.055 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.120 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.122 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.183 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.543 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.544 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4931MB free_disk=72.1537094116211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.545 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.545 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.611 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.611 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.612 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.612 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.681 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.697 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.699 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:06:28 compute-0 nova_compute[189459]: 2025-12-02 17:06:28.699 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:06:29 compute-0 nova_compute[189459]: 2025-12-02 17:06:29.609 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:29 compute-0 nova_compute[189459]: 2025-12-02 17:06:29.609 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:29 compute-0 podman[203941]: time="2025-12-02T17:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:06:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:06:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  2 17:06:30 compute-0 nova_compute[189459]: 2025-12-02 17:06:30.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:06:30 compute-0 nova_compute[189459]: 2025-12-02 17:06:30.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: ERROR   17:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: ERROR   17:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: ERROR   17:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: ERROR   17:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: ERROR   17:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:06:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:06:32 compute-0 nova_compute[189459]: 2025-12-02 17:06:32.754 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:33 compute-0 nova_compute[189459]: 2025-12-02 17:06:33.040 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:37 compute-0 nova_compute[189459]: 2025-12-02 17:06:37.758 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:38 compute-0 nova_compute[189459]: 2025-12-02 17:06:38.042 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:38 compute-0 podman[248101]: 2025-12-02 17:06:38.337022067 +0000 UTC m=+0.143204510 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 17:06:41 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec  2 17:06:41 compute-0 systemd[1]: session-30.scope: Consumed 4.662s CPU time.
Dec  2 17:06:41 compute-0 systemd-logind[790]: Session 30 logged out. Waiting for processes to exit.
Dec  2 17:06:41 compute-0 systemd-logind[790]: Removed session 30.
Dec  2 17:06:42 compute-0 nova_compute[189459]: 2025-12-02 17:06:42.761 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:43 compute-0 nova_compute[189459]: 2025-12-02 17:06:43.046 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:44 compute-0 podman[248121]: 2025-12-02 17:06:44.803083298 +0000 UTC m=+0.083091503 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 17:06:44 compute-0 podman[248120]: 2025-12-02 17:06:44.808622346 +0000 UTC m=+0.087623674 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec  2 17:06:47 compute-0 podman[248160]: 2025-12-02 17:06:47.269248615 +0000 UTC m=+0.076769083 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 17:06:47 compute-0 podman[248159]: 2025-12-02 17:06:47.288527841 +0000 UTC m=+0.105517052 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, release-0.7.12=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.buildah.version=1.29.0)
Dec  2 17:06:47 compute-0 podman[248158]: 2025-12-02 17:06:47.293339759 +0000 UTC m=+0.119669170 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec  2 17:06:47 compute-0 nova_compute[189459]: 2025-12-02 17:06:47.764 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:48 compute-0 nova_compute[189459]: 2025-12-02 17:06:48.048 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:52 compute-0 nova_compute[189459]: 2025-12-02 17:06:52.768 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:53 compute-0 nova_compute[189459]: 2025-12-02 17:06:53.053 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:56 compute-0 podman[248213]: 2025-12-02 17:06:56.310129419 +0000 UTC m=+0.111912833 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:06:56 compute-0 podman[248214]: 2025-12-02 17:06:56.325458599 +0000 UTC m=+0.119404163 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:06:56 compute-0 podman[248212]: 2025-12-02 17:06:56.352924103 +0000 UTC m=+0.161616182 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:06:57 compute-0 nova_compute[189459]: 2025-12-02 17:06:57.770 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:58 compute-0 nova_compute[189459]: 2025-12-02 17:06:58.055 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:06:59 compute-0 podman[203941]: time="2025-12-02T17:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:06:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:06:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4779 "" "Go-http-client/1.1"
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: ERROR   17:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: ERROR   17:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: ERROR   17:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: ERROR   17:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: ERROR   17:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:07:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:07:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:01.876 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:01.878 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:01.879 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:02 compute-0 nova_compute[189459]: 2025-12-02 17:07:02.773 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.052 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.052 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 nova_compute[189459]: 2025-12-02 17:07:03.059 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8892b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.064 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'name': 'test_0', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '941718a9-628f-4f41-81e3-225760dc6a62', 'name': 'vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz', 'flavor': {'id': '8aba0aff-301c-4123-b0dc-aba3acd2a3ad', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5b0e8045-c81c-486a-86d2-bf0e0fd17a5a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2f96d47197fa40f2a7126bf626847d74', 'user_id': '91c12bcb1ad14b95b1bdedf7527f1adf', 'hostId': '037b8cfb042fb842736b11df137e48ba8fa9c9b539fc39f70ea46059', 'status': 'active', 'metadata': {'metering.server_group': 'a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:07:03.067695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.072 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.075 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.076 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.077 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.077 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.077 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.078 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:07:03.076889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:07:03.078327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.113 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.114 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.114 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 rsyslogd[236995]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 17:07:03 compute-0 rsyslogd[236995]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.153 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.154 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.154 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.155 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.156 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:07:03.156232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.183 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/cpu volume: 45640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.204 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/cpu volume: 39490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:07:03.205756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.273 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.273 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.273 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.351 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.352 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.352 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.353 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.354 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 433185196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.354 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 88307127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:07:03.354159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.355 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.latency volume: 53354006 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.355 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 717183131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.355 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 81550079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.356 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.latency volume: 63467364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:07:03.357644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.358 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.358 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.358 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.359 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.359 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.359 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:07:03.361118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.361 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.361 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.362 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.362 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.362 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.363 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.364 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.364 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.364 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.365 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.365 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.365 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.366 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:07:03.364106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.367 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.368 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:07:03.367707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.368 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.369 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.369 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.369 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.370 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.371 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.371 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 1962762677 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.371 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 9331229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:07:03.371095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.372 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.372 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 709154876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.372 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 8231189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.373 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.374 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.374 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.374 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:07:03.374292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.376 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.376 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.377 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:07:03.376079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.377 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.377 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.378 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.378 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.379 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.379 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.379 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.379 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:07:03.379139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.381 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.381 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:07:03.381075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:07:03.382085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:07:03.383406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:07:03.384406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.384 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:07:03.385674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.387 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.387 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.387 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:07:03.386894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.388 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.388 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.388 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.389 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/memory.usage volume: 48.89453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:07:03.388343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:07:03.389763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:07:03.391205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.391 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.392 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.392 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:07:03.392774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.393 14 DEBUG ceilometer.compute.pollsters [-] bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.393 14 DEBUG ceilometer.compute.pollsters [-] 941718a9-628f-4f41-81e3-225760dc6a62/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:07:03.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:07:07 compute-0 nova_compute[189459]: 2025-12-02 17:07:07.776 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:08 compute-0 nova_compute[189459]: 2025-12-02 17:07:08.062 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:09 compute-0 podman[248286]: 2025-12-02 17:07:09.29483582 +0000 UTC m=+0.113404473 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:07:12 compute-0 nova_compute[189459]: 2025-12-02 17:07:12.781 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:13 compute-0 nova_compute[189459]: 2025-12-02 17:07:13.064 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:15 compute-0 podman[248306]: 2025-12-02 17:07:15.293638063 +0000 UTC m=+0.114565354 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:07:15 compute-0 podman[248307]: 2025-12-02 17:07:15.303954339 +0000 UTC m=+0.117383460 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  2 17:07:17 compute-0 nova_compute[189459]: 2025-12-02 17:07:17.783 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:18 compute-0 nova_compute[189459]: 2025-12-02 17:07:18.067 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:18 compute-0 podman[248345]: 2025-12-02 17:07:18.257929448 +0000 UTC m=+0.081352426 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Dec  2 17:07:18 compute-0 podman[248346]: 2025-12-02 17:07:18.262136121 +0000 UTC m=+0.080531784 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 17:07:18 compute-0 podman[248344]: 2025-12-02 17:07:18.265306266 +0000 UTC m=+0.096196283 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  2 17:07:22 compute-0 nova_compute[189459]: 2025-12-02 17:07:22.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:22 compute-0 nova_compute[189459]: 2025-12-02 17:07:22.786 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:23 compute-0 nova_compute[189459]: 2025-12-02 17:07:23.069 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:23 compute-0 nova_compute[189459]: 2025-12-02 17:07:23.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:24 compute-0 nova_compute[189459]: 2025-12-02 17:07:24.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:26 compute-0 nova_compute[189459]: 2025-12-02 17:07:26.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:26 compute-0 nova_compute[189459]: 2025-12-02 17:07:26.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:26 compute-0 nova_compute[189459]: 2025-12-02 17:07:26.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:07:26 compute-0 nova_compute[189459]: 2025-12-02 17:07:26.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:07:27 compute-0 podman[248402]: 2025-12-02 17:07:27.27657972 +0000 UTC m=+0.089784532 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:07:27 compute-0 nova_compute[189459]: 2025-12-02 17:07:27.300 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:07:27 compute-0 nova_compute[189459]: 2025-12-02 17:07:27.300 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:07:27 compute-0 nova_compute[189459]: 2025-12-02 17:07:27.301 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:07:27 compute-0 nova_compute[189459]: 2025-12-02 17:07:27.301 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:07:27 compute-0 podman[248403]: 2025-12-02 17:07:27.304827225 +0000 UTC m=+0.104737731 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:07:27 compute-0 podman[248401]: 2025-12-02 17:07:27.325762935 +0000 UTC m=+0.143772125 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 17:07:27 compute-0 nova_compute[189459]: 2025-12-02 17:07:27.789 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:28 compute-0 nova_compute[189459]: 2025-12-02 17:07:28.072 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.340 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [{"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.358 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.359 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.360 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.361 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.402 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.403 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.404 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.404 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.487 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.576 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.576 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.652 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.654 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:29 compute-0 podman[203941]: time="2025-12-02T17:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:07:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.755 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.758 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.850 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.865 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.963 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:29 compute-0 nova_compute[189459]: 2025-12-02 17:07:29.964 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.054 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.056 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.117 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.119 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.182 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.629 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.631 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4896MB free_disk=72.15368270874023GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.631 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.631 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.725 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.725 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 941718a9-628f-4f41-81e3-225760dc6a62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.726 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.726 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.814 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.829 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.832 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:07:30 compute-0 nova_compute[189459]: 2025-12-02 17:07:30.833 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: ERROR   17:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: ERROR   17:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: ERROR   17:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: ERROR   17:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: ERROR   17:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:07:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:07:32 compute-0 nova_compute[189459]: 2025-12-02 17:07:32.793 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:32 compute-0 nova_compute[189459]: 2025-12-02 17:07:32.882 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:32 compute-0 nova_compute[189459]: 2025-12-02 17:07:32.883 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:07:32 compute-0 nova_compute[189459]: 2025-12-02 17:07:32.884 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:07:33 compute-0 nova_compute[189459]: 2025-12-02 17:07:33.076 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:37 compute-0 nova_compute[189459]: 2025-12-02 17:07:37.796 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:38 compute-0 nova_compute[189459]: 2025-12-02 17:07:38.082 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:40 compute-0 podman[248493]: 2025-12-02 17:07:40.278505297 +0000 UTC m=+0.104707854 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  2 17:07:42 compute-0 nova_compute[189459]: 2025-12-02 17:07:42.800 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:43 compute-0 nova_compute[189459]: 2025-12-02 17:07:43.085 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.495 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.496 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.496 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.497 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.497 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.498 189463 INFO nova.compute.manager [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Terminating instance#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.499 189463 DEBUG nova.compute.manager [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:07:45 compute-0 kernel: tapb511e990-3b (unregistering): left promiscuous mode
Dec  2 17:07:45 compute-0 NetworkManager[56503]: <info>  [1764695265.5455] device (tapb511e990-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.556 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 ovn_controller[97975]: 2025-12-02T17:07:45Z|00058|binding|INFO|Releasing lport b511e990-3b17-4177-96a7-40fc44f7937a from this chassis (sb_readonly=0)
Dec  2 17:07:45 compute-0 ovn_controller[97975]: 2025-12-02T17:07:45Z|00059|binding|INFO|Setting lport b511e990-3b17-4177-96a7-40fc44f7937a down in Southbound
Dec  2 17:07:45 compute-0 ovn_controller[97975]: 2025-12-02T17:07:45Z|00060|binding|INFO|Removing iface tapb511e990-3b ovn-installed in OVS
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.560 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.565 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:03:ce 192.168.0.90'], port_security=['fa:16:3e:3f:03:ce 192.168.0.90'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-lawun5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-port-iulc4swxkdk6', 'neutron:cidrs': '192.168.0.90/24', 'neutron:device_id': '941718a9-628f-4f41-81e3-225760dc6a62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-lawun5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-port-iulc4swxkdk6', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=b511e990-3b17-4177-96a7-40fc44f7937a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.567 106835 INFO neutron.agent.ovn.metadata.agent [-] Port b511e990-3b17-4177-96a7-40fc44f7937a in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d unbound from our chassis#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.568 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.586 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.594 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c4143c-28f8-4124-be34-6dce67ce4855]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:07:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  2 17:07:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 55.676s CPU time.
Dec  2 17:07:45 compute-0 systemd-machined[155878]: Machine qemu-4-instance-00000004 terminated.
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.629 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[bb90f83f-241f-432a-9fe1-c22c17e0e914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.631 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3e69ea-1312-43f3-9b57-423443d569b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.662 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e1717b-a6a2-4f59-9eb6-2aca6e9bd81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:07:45 compute-0 podman[248518]: 2025-12-02 17:07:45.669647647 +0000 UTC m=+0.088950284 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.681 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ff32b98f-09fb-4d1b-8b2f-e4c33ddd5d32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de25f73-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:b4:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377188, 'reachable_time': 30821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248564, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:07:45 compute-0 podman[248519]: 2025-12-02 17:07:45.689081835 +0000 UTC m=+0.105965898 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.697 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ddee96b2-022d-414a-b28a-45433aeac723]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377200, 'tstamp': 377200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248565, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0de25f73-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377202, 'tstamp': 377202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248565, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.699 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.700 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.708 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.708 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de25f73-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.708 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.708 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de25f73-f0, col_values=(('external_ids', {'iface-id': 'eee37dc5-79f7-4a26-b100-4f955e7030f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:07:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:45.709 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.807 189463 DEBUG nova.compute.manager [req-c211e7c9-5a46-414e-933b-116249a76c8d req-aa5c16c5-ce2a-4280-b86f-fd01a68b4102 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-vif-unplugged-b511e990-3b17-4177-96a7-40fc44f7937a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.807 189463 DEBUG oslo_concurrency.lockutils [req-c211e7c9-5a46-414e-933b-116249a76c8d req-aa5c16c5-ce2a-4280-b86f-fd01a68b4102 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.808 189463 DEBUG oslo_concurrency.lockutils [req-c211e7c9-5a46-414e-933b-116249a76c8d req-aa5c16c5-ce2a-4280-b86f-fd01a68b4102 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.808 189463 DEBUG oslo_concurrency.lockutils [req-c211e7c9-5a46-414e-933b-116249a76c8d req-aa5c16c5-ce2a-4280-b86f-fd01a68b4102 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.809 189463 DEBUG nova.compute.manager [req-c211e7c9-5a46-414e-933b-116249a76c8d req-aa5c16c5-ce2a-4280-b86f-fd01a68b4102 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] No waiting events found dispatching network-vif-unplugged-b511e990-3b17-4177-96a7-40fc44f7937a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.809 189463 DEBUG nova.compute.manager [req-c211e7c9-5a46-414e-933b-116249a76c8d req-aa5c16c5-ce2a-4280-b86f-fd01a68b4102 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-vif-unplugged-b511e990-3b17-4177-96a7-40fc44f7937a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.818 189463 INFO nova.virt.libvirt.driver [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Instance destroyed successfully.#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.819 189463 DEBUG nova.objects.instance [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'resources' on Instance uuid 941718a9-628f-4f41-81e3-225760dc6a62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.834 189463 DEBUG nova.virt.libvirt.vif [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T16:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-5rqv7xv-q7o5otzrhl2q-gyohlqnxmqmy-vnf-6bj6m5iy57uz',id=4,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T16:57:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a03c9b84-1553-4b2d-92ef-bf6c5c3b2fea'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-lluuumkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T16:57:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTI4NTQ2NDI4MTcxODA2NTY0ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  2 17:07:45 compute-0 nova_compute[189459]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTI4N
TQ2NDI4MTcxODA2NTY0ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUyODU0NjQyODE3MTgwNjU2NDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01Mjg1NDY0MjgxNzE4MDY1NjQ0PT0tLQo=',user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=941718a9-628f-4f41-81e3-225760dc6a62,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.834 189463 DEBUG nova.network.os_vif_util [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.835 189463 DEBUG nova.network.os_vif_util [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.836 189463 DEBUG os_vif [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.837 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.838 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb511e990-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.839 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.841 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.844 189463 INFO os_vif [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=b511e990-3b17-4177-96a7-40fc44f7937a,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb511e990-3b')#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.844 189463 INFO nova.virt.libvirt.driver [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Deleting instance files /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62_del#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.845 189463 INFO nova.virt.libvirt.driver [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Deletion of /var/lib/nova/instances/941718a9-628f-4f41-81e3-225760dc6a62_del complete#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.916 189463 INFO nova.compute.manager [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.916 189463 DEBUG oslo.service.loopingcall [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.917 189463 DEBUG nova.compute.manager [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:07:45 compute-0 nova_compute[189459]: 2025-12-02 17:07:45.917 189463 DEBUG nova.network.neutron [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:07:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:46.031 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:07:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:46.032 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:07:46 compute-0 nova_compute[189459]: 2025-12-02 17:07:46.033 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:46 compute-0 nova_compute[189459]: 2025-12-02 17:07:46.085 189463 DEBUG nova.compute.manager [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-changed-b511e990-3b17-4177-96a7-40fc44f7937a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:07:46 compute-0 nova_compute[189459]: 2025-12-02 17:07:46.085 189463 DEBUG nova.compute.manager [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Refreshing instance network info cache due to event network-changed-b511e990-3b17-4177-96a7-40fc44f7937a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:07:46 compute-0 nova_compute[189459]: 2025-12-02 17:07:46.085 189463 DEBUG oslo_concurrency.lockutils [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:07:46 compute-0 nova_compute[189459]: 2025-12-02 17:07:46.086 189463 DEBUG oslo_concurrency.lockutils [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:07:46 compute-0 nova_compute[189459]: 2025-12-02 17:07:46.086 189463 DEBUG nova.network.neutron [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Refreshing network info cache for port b511e990-3b17-4177-96a7-40fc44f7937a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:07:46 compute-0 rsyslogd[236995]: message too long (8192) with configured size 8096, begin of message is: 2025-12-02 17:07:45.834 189463 DEBUG nova.virt.libvirt.vif [None req-6569ba43-51 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.927 189463 DEBUG nova.network.neutron [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.934 189463 DEBUG nova.compute.manager [req-d701d60d-bea8-486a-9ef5-fd1fdb8cefea req-937743d6-2845-4875-a55c-fc0db46b2994 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.935 189463 DEBUG oslo_concurrency.lockutils [req-d701d60d-bea8-486a-9ef5-fd1fdb8cefea req-937743d6-2845-4875-a55c-fc0db46b2994 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "941718a9-628f-4f41-81e3-225760dc6a62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.936 189463 DEBUG oslo_concurrency.lockutils [req-d701d60d-bea8-486a-9ef5-fd1fdb8cefea req-937743d6-2845-4875-a55c-fc0db46b2994 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.936 189463 DEBUG oslo_concurrency.lockutils [req-d701d60d-bea8-486a-9ef5-fd1fdb8cefea req-937743d6-2845-4875-a55c-fc0db46b2994 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.936 189463 DEBUG nova.compute.manager [req-d701d60d-bea8-486a-9ef5-fd1fdb8cefea req-937743d6-2845-4875-a55c-fc0db46b2994 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] No waiting events found dispatching network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.937 189463 WARNING nova.compute.manager [req-d701d60d-bea8-486a-9ef5-fd1fdb8cefea req-937743d6-2845-4875-a55c-fc0db46b2994 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Received unexpected event network-vif-plugged-b511e990-3b17-4177-96a7-40fc44f7937a for instance with vm_state active and task_state deleting.#033[00m
Dec  2 17:07:47 compute-0 nova_compute[189459]: 2025-12-02 17:07:47.968 189463 INFO nova.compute.manager [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Took 2.05 seconds to deallocate network for instance.#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.036 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.037 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.088 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.157 189463 DEBUG nova.compute.provider_tree [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.261 189463 DEBUG nova.scheduler.client.report [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.288 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.317 189463 INFO nova.scheduler.client.report [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Deleted allocations for instance 941718a9-628f-4f41-81e3-225760dc6a62#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.369 189463 DEBUG nova.network.neutron [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updated VIF entry in instance network info cache for port b511e990-3b17-4177-96a7-40fc44f7937a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.370 189463 DEBUG nova.network.neutron [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Updating instance_info_cache with network_info: [{"id": "b511e990-3b17-4177-96a7-40fc44f7937a", "address": "fa:16:3e:3f:03:ce", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.90", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb511e990-3b", "ovs_interfaceid": "b511e990-3b17-4177-96a7-40fc44f7937a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.376 189463 DEBUG oslo_concurrency.lockutils [None req-6569ba43-514f-427b-a077-8d5846a86412 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "941718a9-628f-4f41-81e3-225760dc6a62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:07:48 compute-0 nova_compute[189459]: 2025-12-02 17:07:48.390 189463 DEBUG oslo_concurrency.lockutils [req-0fd190e2-edf6-4067-b55e-e940a898c70c req-063830f1-2387-42fd-8069-b14e395230ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-941718a9-628f-4f41-81e3-225760dc6a62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:07:49 compute-0 podman[248588]: 2025-12-02 17:07:49.275521107 +0000 UTC m=+0.093465625 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  2 17:07:49 compute-0 podman[248590]: 2025-12-02 17:07:49.291587736 +0000 UTC m=+0.088757039 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec  2 17:07:49 compute-0 podman[248589]: 2025-12-02 17:07:49.297619336 +0000 UTC m=+0.109526563 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:07:50 compute-0 nova_compute[189459]: 2025-12-02 17:07:50.839 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:53 compute-0 nova_compute[189459]: 2025-12-02 17:07:53.091 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:55 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:07:55.035 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:07:55 compute-0 nova_compute[189459]: 2025-12-02 17:07:55.842 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:58 compute-0 nova_compute[189459]: 2025-12-02 17:07:58.095 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:07:58 compute-0 podman[248645]: 2025-12-02 17:07:58.262047371 +0000 UTC m=+0.087103765 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:07:58 compute-0 podman[248644]: 2025-12-02 17:07:58.303931619 +0000 UTC m=+0.123618469 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:07:58 compute-0 podman[248646]: 2025-12-02 17:07:58.309513438 +0000 UTC m=+0.121336448 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:07:59 compute-0 podman[203941]: time="2025-12-02T17:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:07:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:07:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec  2 17:08:00 compute-0 nova_compute[189459]: 2025-12-02 17:08:00.813 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695265.8115826, 941718a9-628f-4f41-81e3-225760dc6a62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:08:00 compute-0 nova_compute[189459]: 2025-12-02 17:08:00.814 189463 INFO nova.compute.manager [-] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:08:00 compute-0 nova_compute[189459]: 2025-12-02 17:08:00.845 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:00 compute-0 nova_compute[189459]: 2025-12-02 17:08:00.847 189463 DEBUG nova.compute.manager [None req-83ecc149-38de-4310-ba35-4a7464074707 - - - - - -] [instance: 941718a9-628f-4f41-81e3-225760dc6a62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: ERROR   17:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: ERROR   17:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: ERROR   17:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: ERROR   17:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: ERROR   17:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:08:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:08:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:01.877 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:01.879 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:01.880 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.095 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.681 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.682 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.682 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.683 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.683 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.684 189463 INFO nova.compute.manager [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Terminating instance#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.685 189463 DEBUG nova.compute.manager [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:08:03 compute-0 kernel: tap88cefba1-ab (unregistering): left promiscuous mode
Dec  2 17:08:03 compute-0 NetworkManager[56503]: <info>  [1764695283.7231] device (tap88cefba1-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:08:03 compute-0 ovn_controller[97975]: 2025-12-02T17:08:03Z|00061|binding|INFO|Releasing lport 88cefba1-abc8-4573-900a-031390192acc from this chassis (sb_readonly=0)
Dec  2 17:08:03 compute-0 ovn_controller[97975]: 2025-12-02T17:08:03Z|00062|binding|INFO|Setting lport 88cefba1-abc8-4573-900a-031390192acc down in Southbound
Dec  2 17:08:03 compute-0 ovn_controller[97975]: 2025-12-02T17:08:03Z|00063|binding|INFO|Removing iface tap88cefba1-ab ovn-installed in OVS
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.737 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:03.744 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:87:16 192.168.0.223'], port_security=['fa:16:3e:a3:87:16 192.168.0.223'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.223/24', 'neutron:device_id': 'bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f96d47197fa40f2a7126bf626847d74', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2f578b8-ec3c-4fec-b92a-e88835200c37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5333905f-03bb-46a9-abe5-817b01617c1a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=88cefba1-abc8-4573-900a-031390192acc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:08:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:03.749 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 88cefba1-abc8-4573-900a-031390192acc in datapath 0de25f73-f1ea-4477-bf20-c9bdbb417b7d unbound from our chassis#033[00m
Dec  2 17:08:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:03.753 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de25f73-f1ea-4477-bf20-c9bdbb417b7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:08:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:03.755 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ad557dfe-fa51-46de-8c7c-979491398027]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:03.757 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d namespace which is not needed anymore#033[00m
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.775 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:03 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  2 17:08:03 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 58.903s CPU time.
Dec  2 17:08:03 compute-0 systemd-machined[155878]: Machine qemu-1-instance-00000001 terminated.
Dec  2 17:08:03 compute-0 nova_compute[189459]: 2025-12-02 17:08:03.915 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:03 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [NOTICE]   (240124) : haproxy version is 2.8.14-c23fe91
Dec  2 17:08:03 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [NOTICE]   (240124) : path to executable is /usr/sbin/haproxy
Dec  2 17:08:03 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [WARNING]  (240124) : Exiting Master process...
Dec  2 17:08:03 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [ALERT]    (240124) : Current worker (240126) exited with code 143 (Terminated)
Dec  2 17:08:03 compute-0 neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d[240120]: [WARNING]  (240124) : All workers exited. Exiting... (0)
Dec  2 17:08:03 compute-0 systemd[1]: libpod-e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70.scope: Deactivated successfully.
Dec  2 17:08:03 compute-0 podman[248741]: 2025-12-02 17:08:03.961171609 +0000 UTC m=+0.089177261 container died e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.002 189463 INFO nova.virt.libvirt.driver [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Instance destroyed successfully.#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.003 189463 DEBUG nova.objects.instance [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lazy-loading 'resources' on Instance uuid bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.024 189463 DEBUG nova.compute.manager [req-3372f404-1574-4bc2-9ace-74c5ae6558b4 req-7ff45d18-5b98-4d19-a9da-d470877d1065 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-vif-unplugged-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.025 189463 DEBUG oslo_concurrency.lockutils [req-3372f404-1574-4bc2-9ace-74c5ae6558b4 req-7ff45d18-5b98-4d19-a9da-d470877d1065 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.025 189463 DEBUG oslo_concurrency.lockutils [req-3372f404-1574-4bc2-9ace-74c5ae6558b4 req-7ff45d18-5b98-4d19-a9da-d470877d1065 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.025 189463 DEBUG oslo_concurrency.lockutils [req-3372f404-1574-4bc2-9ace-74c5ae6558b4 req-7ff45d18-5b98-4d19-a9da-d470877d1065 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.026 189463 DEBUG nova.compute.manager [req-3372f404-1574-4bc2-9ace-74c5ae6558b4 req-7ff45d18-5b98-4d19-a9da-d470877d1065 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] No waiting events found dispatching network-vif-unplugged-88cefba1-abc8-4573-900a-031390192acc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.026 189463 DEBUG nova.compute.manager [req-3372f404-1574-4bc2-9ace-74c5ae6558b4 req-7ff45d18-5b98-4d19-a9da-d470877d1065 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-vif-unplugged-88cefba1-abc8-4573-900a-031390192acc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.032 189463 DEBUG nova.virt.libvirt.vif [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T16:50:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T16:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f96d47197fa40f2a7126bf626847d74',ramdisk_id='',reservation_id='r-4h695zkr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='5b0e8045-c81c-486a-86d2-bf0e0fd17a5a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T16:50:19Z,user_data=None,user_id='91c12bcb1ad14b95b1bdedf7527f1adf',uuid=bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.033 189463 DEBUG nova.network.os_vif_util [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converting VIF {"id": "88cefba1-abc8-4573-900a-031390192acc", "address": "fa:16:3e:a3:87:16", "network": {"id": "0de25f73-f1ea-4477-bf20-c9bdbb417b7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f96d47197fa40f2a7126bf626847d74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cefba1-ab", "ovs_interfaceid": "88cefba1-abc8-4573-900a-031390192acc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.034 189463 DEBUG nova.network.os_vif_util [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.034 189463 DEBUG os_vif [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.036 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.036 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88cefba1-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.040 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70-userdata-shm.mount: Deactivated successfully.
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.042 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.045 189463 INFO os_vif [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:87:16,bridge_name='br-int',has_traffic_filtering=True,id=88cefba1-abc8-4573-900a-031390192acc,network=Network(0de25f73-f1ea-4477-bf20-c9bdbb417b7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cefba1-ab')#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.046 189463 INFO nova.virt.libvirt.driver [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Deleting instance files /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a_del#033[00m
Dec  2 17:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-68808cb0c0ad86e442f82eeebd636a4d93f745b23fe002550bffff69807978eb-merged.mount: Deactivated successfully.
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.047 189463 INFO nova.virt.libvirt.driver [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Deletion of /var/lib/nova/instances/bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a_del complete#033[00m
Dec  2 17:08:04 compute-0 podman[248741]: 2025-12-02 17:08:04.0534286 +0000 UTC m=+0.181434252 container cleanup e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 17:08:04 compute-0 systemd[1]: libpod-conmon-e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70.scope: Deactivated successfully.
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.099 189463 INFO nova.compute.manager [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.099 189463 DEBUG oslo.service.loopingcall [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.100 189463 DEBUG nova.compute.manager [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.100 189463 DEBUG nova.network.neutron [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:08:04 compute-0 podman[248790]: 2025-12-02 17:08:04.138562411 +0000 UTC m=+0.052375388 container remove e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.147 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f841cf50-3af3-4e27-8d76-ac6d0e0e196a]: (4, ('Tue Dec  2 05:08:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d (e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70)\ne5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70\nTue Dec  2 05:08:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d (e5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70)\ne5df9a1ae3d19c3e96f900c98cad234a4e3ce3e030173ea9729f521d3545ab70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.150 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d534b644-2a5e-413f-93a2-ead93109f27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.151 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de25f73-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:08:04 compute-0 kernel: tap0de25f73-f0: left promiscuous mode
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.154 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.161 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac0ac42-53bb-488c-b98d-cb488d3b2d8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.173 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.182 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0a5fb0-7a8f-42aa-85f0-aa02aad182ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.184 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[14a2eeac-15e9-45d2-9953-a2ee135119b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.202 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb0868d-859a-4e7f-9fdf-87b436d75148]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377176, 'reachable_time': 18695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248805, 'error': None, 'target': 'ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d0de25f73\x2df1ea\x2d4477\x2dbf20\x2dc9bdbb417b7d.mount: Deactivated successfully.
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.217 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de25f73-f1ea-4477-bf20-c9bdbb417b7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:08:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:08:04.218 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[0d09c681-5874-4362-ac14-ffc7cf489846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.945 189463 DEBUG nova.network.neutron [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:08:04 compute-0 nova_compute[189459]: 2025-12-02 17:08:04.966 189463 INFO nova.compute.manager [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Took 0.87 seconds to deallocate network for instance.#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.003 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.004 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.025 189463 DEBUG nova.compute.manager [req-772eee3b-70ce-4817-ac63-ab22a70a9201 req-ed0e85c6-8887-4830-9d3c-59e4ed1d80c9 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-vif-deleted-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.071 189463 DEBUG nova.compute.provider_tree [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.085 189463 DEBUG nova.scheduler.client.report [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.104 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.130 189463 INFO nova.scheduler.client.report [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Deleted allocations for instance bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a#033[00m
Dec  2 17:08:05 compute-0 nova_compute[189459]: 2025-12-02 17:08:05.195 189463 DEBUG oslo_concurrency.lockutils [None req-e53cf5bc-248e-46e8-b499-934428df9c34 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:06 compute-0 nova_compute[189459]: 2025-12-02 17:08:06.100 189463 DEBUG nova.compute.manager [req-ccbc8286-13df-4d37-94ab-ec15a3092c70 req-0365b4ed-2b51-40af-81b3-da12d945202e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:08:06 compute-0 nova_compute[189459]: 2025-12-02 17:08:06.101 189463 DEBUG oslo_concurrency.lockutils [req-ccbc8286-13df-4d37-94ab-ec15a3092c70 req-0365b4ed-2b51-40af-81b3-da12d945202e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:06 compute-0 nova_compute[189459]: 2025-12-02 17:08:06.101 189463 DEBUG oslo_concurrency.lockutils [req-ccbc8286-13df-4d37-94ab-ec15a3092c70 req-0365b4ed-2b51-40af-81b3-da12d945202e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:06 compute-0 nova_compute[189459]: 2025-12-02 17:08:06.102 189463 DEBUG oslo_concurrency.lockutils [req-ccbc8286-13df-4d37-94ab-ec15a3092c70 req-0365b4ed-2b51-40af-81b3-da12d945202e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:06 compute-0 nova_compute[189459]: 2025-12-02 17:08:06.102 189463 DEBUG nova.compute.manager [req-ccbc8286-13df-4d37-94ab-ec15a3092c70 req-0365b4ed-2b51-40af-81b3-da12d945202e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] No waiting events found dispatching network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:08:06 compute-0 nova_compute[189459]: 2025-12-02 17:08:06.103 189463 WARNING nova.compute.manager [req-ccbc8286-13df-4d37-94ab-ec15a3092c70 req-0365b4ed-2b51-40af-81b3-da12d945202e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Received unexpected event network-vif-plugged-88cefba1-abc8-4573-900a-031390192acc for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:08:08 compute-0 nova_compute[189459]: 2025-12-02 17:08:08.097 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:09 compute-0 nova_compute[189459]: 2025-12-02 17:08:09.039 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:11 compute-0 podman[248807]: 2025-12-02 17:08:11.279332442 +0000 UTC m=+0.104993693 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 17:08:13 compute-0 nova_compute[189459]: 2025-12-02 17:08:13.101 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:14 compute-0 nova_compute[189459]: 2025-12-02 17:08:14.042 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:16 compute-0 podman[248829]: 2025-12-02 17:08:16.264092603 +0000 UTC m=+0.076225984 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  2 17:08:16 compute-0 podman[248830]: 2025-12-02 17:08:16.299118988 +0000 UTC m=+0.117626399 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:08:18 compute-0 nova_compute[189459]: 2025-12-02 17:08:18.104 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:18 compute-0 nova_compute[189459]: 2025-12-02 17:08:18.996 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695283.994155, bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:08:18 compute-0 nova_compute[189459]: 2025-12-02 17:08:18.996 189463 INFO nova.compute.manager [-] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:08:19 compute-0 nova_compute[189459]: 2025-12-02 17:08:19.043 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:19 compute-0 nova_compute[189459]: 2025-12-02 17:08:19.062 189463 DEBUG nova.compute.manager [None req-46dc8b17-33bf-4705-8e54-16a6dc4ab877 - - - - - -] [instance: bb686cbf-bbdb-44e1-8341-eb4d6b5cb69a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:08:20 compute-0 podman[248870]: 2025-12-02 17:08:20.234310687 +0000 UTC m=+0.060186667 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:08:20 compute-0 podman[248869]: 2025-12-02 17:08:20.241158999 +0000 UTC m=+0.068071017 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:08:20 compute-0 podman[248868]: 2025-12-02 17:08:20.28617881 +0000 UTC m=+0.114422513 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:08:22 compute-0 nova_compute[189459]: 2025-12-02 17:08:22.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:23 compute-0 nova_compute[189459]: 2025-12-02 17:08:23.106 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:23 compute-0 nova_compute[189459]: 2025-12-02 17:08:23.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:24 compute-0 nova_compute[189459]: 2025-12-02 17:08:24.045 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:25 compute-0 nova_compute[189459]: 2025-12-02 17:08:25.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:25 compute-0 nova_compute[189459]: 2025-12-02 17:08:25.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:26 compute-0 nova_compute[189459]: 2025-12-02 17:08:26.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:28 compute-0 nova_compute[189459]: 2025-12-02 17:08:28.110 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:28 compute-0 nova_compute[189459]: 2025-12-02 17:08:28.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:28 compute-0 nova_compute[189459]: 2025-12-02 17:08:28.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:08:28 compute-0 nova_compute[189459]: 2025-12-02 17:08:28.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.049 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:29 compute-0 podman[248922]: 2025-12-02 17:08:29.289241386 +0000 UTC m=+0.108285260 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:08:29 compute-0 podman[248921]: 2025-12-02 17:08:29.290194792 +0000 UTC m=+0.108879756 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:08:29 compute-0 podman[248920]: 2025-12-02 17:08:29.308912781 +0000 UTC m=+0.138816855 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.439 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.439 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.440 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:08:29 compute-0 podman[203941]: time="2025-12-02T17:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:08:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:08:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4317 "" "Go-http-client/1.1"
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.838 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.839 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5399MB free_disk=72.1977653503418GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.839 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.839 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.917 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.918 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.948 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.971 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.997 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:08:29 compute-0 nova_compute[189459]: 2025-12-02 17:08:29.998 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:08:31 compute-0 nova_compute[189459]: 2025-12-02 17:08:31.002 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: ERROR   17:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: ERROR   17:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: ERROR   17:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: ERROR   17:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: ERROR   17:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:08:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:08:32 compute-0 nova_compute[189459]: 2025-12-02 17:08:32.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:33 compute-0 nova_compute[189459]: 2025-12-02 17:08:33.112 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:34 compute-0 nova_compute[189459]: 2025-12-02 17:08:34.052 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:34 compute-0 nova_compute[189459]: 2025-12-02 17:08:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:08:34 compute-0 nova_compute[189459]: 2025-12-02 17:08:34.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:08:34 compute-0 ovn_controller[97975]: 2025-12-02T17:08:34Z|00064|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  2 17:08:38 compute-0 nova_compute[189459]: 2025-12-02 17:08:38.115 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:39 compute-0 nova_compute[189459]: 2025-12-02 17:08:39.055 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:42 compute-0 podman[248989]: 2025-12-02 17:08:42.258878236 +0000 UTC m=+0.086264743 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec  2 17:08:43 compute-0 nova_compute[189459]: 2025-12-02 17:08:43.116 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:44 compute-0 nova_compute[189459]: 2025-12-02 17:08:44.059 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:47 compute-0 podman[249010]: 2025-12-02 17:08:47.279902785 +0000 UTC m=+0.104040367 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:08:47 compute-0 podman[249009]: 2025-12-02 17:08:47.281864647 +0000 UTC m=+0.099068064 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  2 17:08:48 compute-0 nova_compute[189459]: 2025-12-02 17:08:48.119 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:49 compute-0 nova_compute[189459]: 2025-12-02 17:08:49.062 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:51 compute-0 podman[249051]: 2025-12-02 17:08:51.264015808 +0000 UTC m=+0.075505145 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:08:51 compute-0 podman[249049]: 2025-12-02 17:08:51.294604514 +0000 UTC m=+0.111087034 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:08:51 compute-0 podman[249050]: 2025-12-02 17:08:51.309916353 +0000 UTC m=+0.115201065 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, architecture=x86_64, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.4, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec  2 17:08:53 compute-0 nova_compute[189459]: 2025-12-02 17:08:53.123 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:54 compute-0 nova_compute[189459]: 2025-12-02 17:08:54.065 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:58 compute-0 nova_compute[189459]: 2025-12-02 17:08:58.124 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:59 compute-0 nova_compute[189459]: 2025-12-02 17:08:59.069 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:08:59 compute-0 podman[203941]: time="2025-12-02T17:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:08:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:08:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4315 "" "Go-http-client/1.1"
Dec  2 17:09:00 compute-0 podman[249108]: 2025-12-02 17:09:00.281273623 +0000 UTC m=+0.094078281 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:09:00 compute-0 podman[249107]: 2025-12-02 17:09:00.29988137 +0000 UTC m=+0.130059102 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec  2 17:09:00 compute-0 podman[249109]: 2025-12-02 17:09:00.320340125 +0000 UTC m=+0.128333475 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: ERROR   17:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: ERROR   17:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: ERROR   17:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: ERROR   17:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: ERROR   17:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:09:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:09:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:09:01.879 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:09:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:09:01.879 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:09:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:09:01.880 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.052 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.053 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d72d250>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:09:03.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:09:03 compute-0 nova_compute[189459]: 2025-12-02 17:09:03.127 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:04 compute-0 nova_compute[189459]: 2025-12-02 17:09:04.073 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:08 compute-0 nova_compute[189459]: 2025-12-02 17:09:08.129 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:09 compute-0 nova_compute[189459]: 2025-12-02 17:09:09.075 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:13 compute-0 nova_compute[189459]: 2025-12-02 17:09:13.131 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:13 compute-0 podman[249179]: 2025-12-02 17:09:13.275658533 +0000 UTC m=+0.102854405 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:09:14 compute-0 nova_compute[189459]: 2025-12-02 17:09:14.078 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:18 compute-0 nova_compute[189459]: 2025-12-02 17:09:18.134 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:18 compute-0 podman[249201]: 2025-12-02 17:09:18.268513039 +0000 UTC m=+0.087400683 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:09:18 compute-0 podman[249200]: 2025-12-02 17:09:18.288967045 +0000 UTC m=+0.112208615 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 17:09:19 compute-0 nova_compute[189459]: 2025-12-02 17:09:19.083 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:22 compute-0 podman[249239]: 2025-12-02 17:09:22.259452794 +0000 UTC m=+0.075883195 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  2 17:09:22 compute-0 podman[249238]: 2025-12-02 17:09:22.26826918 +0000 UTC m=+0.090382213 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vendor=Red Hat, Inc., name=ubi9, distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container)
Dec  2 17:09:22 compute-0 podman[249237]: 2025-12-02 17:09:22.269247596 +0000 UTC m=+0.095481559 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:09:23 compute-0 nova_compute[189459]: 2025-12-02 17:09:23.138 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:23 compute-0 nova_compute[189459]: 2025-12-02 17:09:23.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:24 compute-0 nova_compute[189459]: 2025-12-02 17:09:24.087 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:25 compute-0 nova_compute[189459]: 2025-12-02 17:09:25.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:26 compute-0 nova_compute[189459]: 2025-12-02 17:09:26.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:28 compute-0 nova_compute[189459]: 2025-12-02 17:09:28.140 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:28 compute-0 nova_compute[189459]: 2025-12-02 17:09:28.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:28 compute-0 nova_compute[189459]: 2025-12-02 17:09:28.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:28 compute-0 nova_compute[189459]: 2025-12-02 17:09:28.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:09:28 compute-0 nova_compute[189459]: 2025-12-02 17:09:28.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:09:28 compute-0 nova_compute[189459]: 2025-12-02 17:09:28.434 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:09:29 compute-0 nova_compute[189459]: 2025-12-02 17:09:29.090 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:29 compute-0 podman[203941]: time="2025-12-02T17:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:09:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:09:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4310 "" "Go-http-client/1.1"
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.441 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.441 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.442 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.809 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.810 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5375MB free_disk=72.19774627685547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.811 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.811 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.983 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:09:30 compute-0 nova_compute[189459]: 2025-12-02 17:09:30.984 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:09:31 compute-0 nova_compute[189459]: 2025-12-02 17:09:31.013 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:09:31 compute-0 nova_compute[189459]: 2025-12-02 17:09:31.027 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:09:31 compute-0 nova_compute[189459]: 2025-12-02 17:09:31.028 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:09:31 compute-0 nova_compute[189459]: 2025-12-02 17:09:31.029 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:09:31 compute-0 podman[249295]: 2025-12-02 17:09:31.254990591 +0000 UTC m=+0.072046453 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:09:31 compute-0 podman[249294]: 2025-12-02 17:09:31.284173359 +0000 UTC m=+0.100122482 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:09:31 compute-0 podman[249293]: 2025-12-02 17:09:31.29506444 +0000 UTC m=+0.121741169 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: ERROR   17:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: ERROR   17:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: ERROR   17:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: ERROR   17:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: ERROR   17:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:09:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:09:33 compute-0 nova_compute[189459]: 2025-12-02 17:09:33.029 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:33 compute-0 nova_compute[189459]: 2025-12-02 17:09:33.143 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:33 compute-0 nova_compute[189459]: 2025-12-02 17:09:33.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:34 compute-0 nova_compute[189459]: 2025-12-02 17:09:34.093 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:34 compute-0 nova_compute[189459]: 2025-12-02 17:09:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:09:34 compute-0 nova_compute[189459]: 2025-12-02 17:09:34.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:09:38 compute-0 nova_compute[189459]: 2025-12-02 17:09:38.145 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:39 compute-0 nova_compute[189459]: 2025-12-02 17:09:39.096 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:43 compute-0 nova_compute[189459]: 2025-12-02 17:09:43.147 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:44 compute-0 nova_compute[189459]: 2025-12-02 17:09:44.100 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:44 compute-0 podman[249366]: 2025-12-02 17:09:44.259234285 +0000 UTC m=+0.085515213 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
Dec  2 17:09:48 compute-0 nova_compute[189459]: 2025-12-02 17:09:48.149 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:49 compute-0 nova_compute[189459]: 2025-12-02 17:09:49.104 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:49 compute-0 podman[249386]: 2025-12-02 17:09:49.260965147 +0000 UTC m=+0.093103295 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:09:49 compute-0 podman[249387]: 2025-12-02 17:09:49.265285952 +0000 UTC m=+0.081171407 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 17:09:53 compute-0 nova_compute[189459]: 2025-12-02 17:09:53.151 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:53 compute-0 podman[249425]: 2025-12-02 17:09:53.271644819 +0000 UTC m=+0.072871945 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:09:53 compute-0 podman[249424]: 2025-12-02 17:09:53.291304093 +0000 UTC m=+0.110342004 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, container_name=kepler, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_id=edpm, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:09:53 compute-0 podman[249423]: 2025-12-02 17:09:53.289137756 +0000 UTC m=+0.111801974 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  2 17:09:54 compute-0 nova_compute[189459]: 2025-12-02 17:09:54.106 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:58 compute-0 nova_compute[189459]: 2025-12-02 17:09:58.156 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:59 compute-0 nova_compute[189459]: 2025-12-02 17:09:59.111 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:09:59 compute-0 podman[203941]: time="2025-12-02T17:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:09:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:09:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4299 "" "Go-http-client/1.1"
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: ERROR   17:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: ERROR   17:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: ERROR   17:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: ERROR   17:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: ERROR   17:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:10:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:10:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:10:01.880 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:10:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:10:01.882 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:10:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:10:01.882 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:10:02 compute-0 podman[249481]: 2025-12-02 17:10:02.250103389 +0000 UTC m=+0.065885079 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:10:02 compute-0 podman[249480]: 2025-12-02 17:10:02.252101592 +0000 UTC m=+0.073639285 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:10:02 compute-0 podman[249479]: 2025-12-02 17:10:02.323322933 +0000 UTC m=+0.151000860 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:10:03 compute-0 nova_compute[189459]: 2025-12-02 17:10:03.157 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:04 compute-0 nova_compute[189459]: 2025-12-02 17:10:04.113 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:08 compute-0 nova_compute[189459]: 2025-12-02 17:10:08.167 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:09 compute-0 nova_compute[189459]: 2025-12-02 17:10:09.123 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:13 compute-0 nova_compute[189459]: 2025-12-02 17:10:13.171 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:14 compute-0 nova_compute[189459]: 2025-12-02 17:10:14.125 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:14 compute-0 podman[249551]: 2025-12-02 17:10:14.78188161 +0000 UTC m=+0.111962828 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, config_id=edpm, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec  2 17:10:18 compute-0 nova_compute[189459]: 2025-12-02 17:10:18.174 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:19 compute-0 nova_compute[189459]: 2025-12-02 17:10:19.129 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:20 compute-0 podman[249576]: 2025-12-02 17:10:20.252326978 +0000 UTC m=+0.078072394 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:10:20 compute-0 podman[249575]: 2025-12-02 17:10:20.282550874 +0000 UTC m=+0.103590984 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:10:23 compute-0 nova_compute[189459]: 2025-12-02 17:10:23.177 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:23 compute-0 nova_compute[189459]: 2025-12-02 17:10:23.751 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:24 compute-0 nova_compute[189459]: 2025-12-02 17:10:24.132 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:24 compute-0 podman[249616]: 2025-12-02 17:10:24.241159306 +0000 UTC m=+0.069148906 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:10:24 compute-0 podman[249618]: 2025-12-02 17:10:24.250618438 +0000 UTC m=+0.069478914 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:10:24 compute-0 podman[249617]: 2025-12-02 17:10:24.275666877 +0000 UTC m=+0.103013730 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  2 17:10:24 compute-0 nova_compute[189459]: 2025-12-02 17:10:24.415 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:26 compute-0 nova_compute[189459]: 2025-12-02 17:10:26.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:26 compute-0 nova_compute[189459]: 2025-12-02 17:10:26.425 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:27 compute-0 nova_compute[189459]: 2025-12-02 17:10:27.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.180 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.442 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.443 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.444 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:10:28 compute-0 nova_compute[189459]: 2025-12-02 17:10:28.467 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:10:29 compute-0 nova_compute[189459]: 2025-12-02 17:10:29.136 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:29 compute-0 podman[203941]: time="2025-12-02T17:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:10:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:10:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4312 "" "Go-http-client/1.1"
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.435 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.436 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.474 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.474 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.475 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.475 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.948 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.949 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5387MB free_disk=72.19776916503906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.950 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:10:30 compute-0 nova_compute[189459]: 2025-12-02 17:10:30.950 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.253 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.254 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.382 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: ERROR   17:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: ERROR   17:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: ERROR   17:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: ERROR   17:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: ERROR   17:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:10:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.472 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.473 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.493 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.513 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.542 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.560 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.562 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:10:31 compute-0 nova_compute[189459]: 2025-12-02 17:10:31.563 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:10:33 compute-0 nova_compute[189459]: 2025-12-02 17:10:33.182 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:33 compute-0 podman[249671]: 2025-12-02 17:10:33.265346765 +0000 UTC m=+0.089440927 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:10:33 compute-0 podman[249672]: 2025-12-02 17:10:33.279473342 +0000 UTC m=+0.095290213 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:10:33 compute-0 podman[249670]: 2025-12-02 17:10:33.324639657 +0000 UTC m=+0.143925661 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 17:10:34 compute-0 nova_compute[189459]: 2025-12-02 17:10:34.139 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:34 compute-0 nova_compute[189459]: 2025-12-02 17:10:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:34 compute-0 nova_compute[189459]: 2025-12-02 17:10:34.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:34 compute-0 nova_compute[189459]: 2025-12-02 17:10:34.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:35 compute-0 nova_compute[189459]: 2025-12-02 17:10:35.431 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:35 compute-0 nova_compute[189459]: 2025-12-02 17:10:35.432 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:10:38 compute-0 nova_compute[189459]: 2025-12-02 17:10:38.184 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:39 compute-0 nova_compute[189459]: 2025-12-02 17:10:39.144 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:40 compute-0 nova_compute[189459]: 2025-12-02 17:10:40.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:10:40 compute-0 nova_compute[189459]: 2025-12-02 17:10:40.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:10:43 compute-0 nova_compute[189459]: 2025-12-02 17:10:43.188 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:44 compute-0 nova_compute[189459]: 2025-12-02 17:10:44.148 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:45 compute-0 podman[249742]: 2025-12-02 17:10:45.273734834 +0000 UTC m=+0.094658346 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:10:48 compute-0 nova_compute[189459]: 2025-12-02 17:10:48.191 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:49 compute-0 nova_compute[189459]: 2025-12-02 17:10:49.152 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:51 compute-0 podman[249762]: 2025-12-02 17:10:51.27668965 +0000 UTC m=+0.107300173 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:10:51 compute-0 podman[249763]: 2025-12-02 17:10:51.292158393 +0000 UTC m=+0.106560944 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:10:53 compute-0 nova_compute[189459]: 2025-12-02 17:10:53.192 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:54 compute-0 nova_compute[189459]: 2025-12-02 17:10:54.156 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:55 compute-0 podman[249798]: 2025-12-02 17:10:55.276921004 +0000 UTC m=+0.100034181 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 17:10:55 compute-0 podman[249802]: 2025-12-02 17:10:55.280048967 +0000 UTC m=+0.087866775 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  2 17:10:55 compute-0 podman[249799]: 2025-12-02 17:10:55.305799344 +0000 UTC m=+0.124226465 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:10:58 compute-0 nova_compute[189459]: 2025-12-02 17:10:58.195 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:59 compute-0 nova_compute[189459]: 2025-12-02 17:10:59.160 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:10:59 compute-0 podman[203941]: time="2025-12-02T17:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:10:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:10:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4318 "" "Go-http-client/1.1"
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: ERROR   17:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: ERROR   17:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: ERROR   17:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: ERROR   17:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: ERROR   17:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:11:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:11:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:11:01.881 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:11:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:11:01.882 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:11:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:11:01.882 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:11:02 compute-0 nova_compute[189459]: 2025-12-02 17:11:02.748 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.053 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.053 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:11:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:11:03 compute-0 nova_compute[189459]: 2025-12-02 17:11:03.198 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:04 compute-0 nova_compute[189459]: 2025-12-02 17:11:04.162 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:04 compute-0 podman[249853]: 2025-12-02 17:11:04.261951519 +0000 UTC m=+0.082640586 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:11:04 compute-0 podman[249854]: 2025-12-02 17:11:04.283883734 +0000 UTC m=+0.100442051 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:11:04 compute-0 podman[249852]: 2025-12-02 17:11:04.298668968 +0000 UTC m=+0.125364965 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 17:11:08 compute-0 nova_compute[189459]: 2025-12-02 17:11:08.201 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:09 compute-0 nova_compute[189459]: 2025-12-02 17:11:09.167 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:13 compute-0 nova_compute[189459]: 2025-12-02 17:11:13.203 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:14 compute-0 nova_compute[189459]: 2025-12-02 17:11:14.171 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:16 compute-0 podman[249922]: 2025-12-02 17:11:16.281643638 +0000 UTC m=+0.098863789 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:11:18 compute-0 nova_compute[189459]: 2025-12-02 17:11:18.207 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:19 compute-0 nova_compute[189459]: 2025-12-02 17:11:19.175 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:22 compute-0 podman[249944]: 2025-12-02 17:11:22.264461525 +0000 UTC m=+0.094734448 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:11:22 compute-0 podman[249945]: 2025-12-02 17:11:22.268485803 +0000 UTC m=+0.095592382 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec  2 17:11:23 compute-0 nova_compute[189459]: 2025-12-02 17:11:23.209 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:24 compute-0 nova_compute[189459]: 2025-12-02 17:11:24.178 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:25 compute-0 nova_compute[189459]: 2025-12-02 17:11:25.449 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:26 compute-0 podman[249984]: 2025-12-02 17:11:26.261604757 +0000 UTC m=+0.097311857 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  2 17:11:26 compute-0 podman[249986]: 2025-12-02 17:11:26.268524422 +0000 UTC m=+0.081478215 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:11:26 compute-0 podman[249985]: 2025-12-02 17:11:26.27782465 +0000 UTC m=+0.101856919 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, version=9.4, managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.buildah.version=1.29.0)
Dec  2 17:11:27 compute-0 nova_compute[189459]: 2025-12-02 17:11:27.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:28 compute-0 nova_compute[189459]: 2025-12-02 17:11:28.211 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:28 compute-0 nova_compute[189459]: 2025-12-02 17:11:28.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:28 compute-0 nova_compute[189459]: 2025-12-02 17:11:28.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:11:28 compute-0 nova_compute[189459]: 2025-12-02 17:11:28.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:11:28 compute-0 nova_compute[189459]: 2025-12-02 17:11:28.556 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:11:29 compute-0 nova_compute[189459]: 2025-12-02 17:11:29.182 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:29 compute-0 nova_compute[189459]: 2025-12-02 17:11:29.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:29 compute-0 podman[203941]: time="2025-12-02T17:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:11:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:11:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4309 "" "Go-http-client/1.1"
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.448 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.449 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.450 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.450 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.850 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.852 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5392MB free_disk=72.19776916503906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.853 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.854 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.929 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.929 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.956 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.971 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.973 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:11:30 compute-0 nova_compute[189459]: 2025-12-02 17:11:30.974 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: ERROR   17:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: ERROR   17:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: ERROR   17:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: ERROR   17:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: ERROR   17:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:11:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:11:31 compute-0 nova_compute[189459]: 2025-12-02 17:11:31.970 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:33 compute-0 nova_compute[189459]: 2025-12-02 17:11:33.213 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:34 compute-0 nova_compute[189459]: 2025-12-02 17:11:34.184 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:34 compute-0 nova_compute[189459]: 2025-12-02 17:11:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:35 compute-0 podman[250037]: 2025-12-02 17:11:35.277462574 +0000 UTC m=+0.091719168 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:11:35 compute-0 podman[250038]: 2025-12-02 17:11:35.306310114 +0000 UTC m=+0.109298037 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:11:35 compute-0 podman[250036]: 2025-12-02 17:11:35.332021599 +0000 UTC m=+0.146675094 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:11:35 compute-0 nova_compute[189459]: 2025-12-02 17:11:35.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:35 compute-0 nova_compute[189459]: 2025-12-02 17:11:35.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:11:36 compute-0 nova_compute[189459]: 2025-12-02 17:11:36.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:11:38 compute-0 nova_compute[189459]: 2025-12-02 17:11:38.217 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:39 compute-0 nova_compute[189459]: 2025-12-02 17:11:39.186 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:43 compute-0 nova_compute[189459]: 2025-12-02 17:11:43.221 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:44 compute-0 nova_compute[189459]: 2025-12-02 17:11:44.189 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:47 compute-0 podman[250109]: 2025-12-02 17:11:47.287420806 +0000 UTC m=+0.107511023 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:11:48 compute-0 nova_compute[189459]: 2025-12-02 17:11:48.223 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:49 compute-0 nova_compute[189459]: 2025-12-02 17:11:49.192 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:53 compute-0 nova_compute[189459]: 2025-12-02 17:11:53.226 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:53 compute-0 podman[250129]: 2025-12-02 17:11:53.283103209 +0000 UTC m=+0.115192606 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:11:53 compute-0 podman[250130]: 2025-12-02 17:11:53.291878161 +0000 UTC m=+0.121038011 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:11:54 compute-0 nova_compute[189459]: 2025-12-02 17:11:54.195 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:57 compute-0 podman[250167]: 2025-12-02 17:11:57.252037433 +0000 UTC m=+0.080173961 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  2 17:11:57 compute-0 podman[250169]: 2025-12-02 17:11:57.26100603 +0000 UTC m=+0.073202496 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 17:11:57 compute-0 podman[250168]: 2025-12-02 17:11:57.280739362 +0000 UTC m=+0.095479285 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, version=9.4, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:11:58 compute-0 nova_compute[189459]: 2025-12-02 17:11:58.228 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:59 compute-0 nova_compute[189459]: 2025-12-02 17:11:59.199 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:11:59 compute-0 podman[203941]: time="2025-12-02T17:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:11:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:11:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4314 "" "Go-http-client/1.1"
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: ERROR   17:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: ERROR   17:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: ERROR   17:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: ERROR   17:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: ERROR   17:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:12:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:12:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:12:01.883 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:12:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:12:01.884 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:12:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:12:01.884 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:12:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:12:02.948 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:12:02 compute-0 nova_compute[189459]: 2025-12-02 17:12:02.951 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:12:02.950 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:12:03 compute-0 nova_compute[189459]: 2025-12-02 17:12:03.231 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:04 compute-0 nova_compute[189459]: 2025-12-02 17:12:04.202 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:12:04.954 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:12:06 compute-0 podman[250222]: 2025-12-02 17:12:06.272923251 +0000 UTC m=+0.080461668 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:12:06 compute-0 podman[250221]: 2025-12-02 17:12:06.283958613 +0000 UTC m=+0.099424840 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:12:06 compute-0 podman[250220]: 2025-12-02 17:12:06.358297168 +0000 UTC m=+0.183310737 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  2 17:12:08 compute-0 nova_compute[189459]: 2025-12-02 17:12:08.233 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:09 compute-0 nova_compute[189459]: 2025-12-02 17:12:09.205 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:13 compute-0 nova_compute[189459]: 2025-12-02 17:12:13.236 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:14 compute-0 nova_compute[189459]: 2025-12-02 17:12:14.209 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:18 compute-0 nova_compute[189459]: 2025-12-02 17:12:18.239 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:18 compute-0 podman[250293]: 2025-12-02 17:12:18.307502057 +0000 UTC m=+0.123470815 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Dec  2 17:12:19 compute-0 nova_compute[189459]: 2025-12-02 17:12:19.216 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:23 compute-0 nova_compute[189459]: 2025-12-02 17:12:23.242 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:24 compute-0 nova_compute[189459]: 2025-12-02 17:12:24.218 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:24 compute-0 podman[250314]: 2025-12-02 17:12:24.25405058 +0000 UTC m=+0.074586122 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:12:24 compute-0 podman[250313]: 2025-12-02 17:12:24.290911235 +0000 UTC m=+0.115734461 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:12:27 compute-0 nova_compute[189459]: 2025-12-02 17:12:27.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:27 compute-0 nova_compute[189459]: 2025-12-02 17:12:27.425 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:28 compute-0 nova_compute[189459]: 2025-12-02 17:12:28.249 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:28 compute-0 podman[250354]: 2025-12-02 17:12:28.298183801 +0000 UTC m=+0.107217825 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi)
Dec  2 17:12:28 compute-0 podman[250356]: 2025-12-02 17:12:28.313163847 +0000 UTC m=+0.109201058 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:12:28 compute-0 podman[250355]: 2025-12-02 17:12:28.317291136 +0000 UTC m=+0.134648750 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release=1214.1726694543, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:12:28 compute-0 nova_compute[189459]: 2025-12-02 17:12:28.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:29 compute-0 nova_compute[189459]: 2025-12-02 17:12:29.221 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:29 compute-0 nova_compute[189459]: 2025-12-02 17:12:29.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:29 compute-0 podman[203941]: time="2025-12-02T17:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:12:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:12:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4316 "" "Go-http-client/1.1"
Dec  2 17:12:30 compute-0 nova_compute[189459]: 2025-12-02 17:12:30.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:30 compute-0 nova_compute[189459]: 2025-12-02 17:12:30.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:12:30 compute-0 nova_compute[189459]: 2025-12-02 17:12:30.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:12:30 compute-0 nova_compute[189459]: 2025-12-02 17:12:30.439 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: ERROR   17:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: ERROR   17:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: ERROR   17:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: ERROR   17:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: ERROR   17:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:12:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.469 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.470 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.470 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.471 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.836 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.838 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5397MB free_disk=72.19776916503906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.838 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.839 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.938 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.938 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:12:31 compute-0 nova_compute[189459]: 2025-12-02 17:12:31.985 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:12:32 compute-0 nova_compute[189459]: 2025-12-02 17:12:32.016 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:12:32 compute-0 nova_compute[189459]: 2025-12-02 17:12:32.018 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:12:32 compute-0 nova_compute[189459]: 2025-12-02 17:12:32.019 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:12:33 compute-0 nova_compute[189459]: 2025-12-02 17:12:33.015 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:33 compute-0 ovn_controller[97975]: 2025-12-02T17:12:33Z|00065|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  2 17:12:33 compute-0 nova_compute[189459]: 2025-12-02 17:12:33.248 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:34 compute-0 nova_compute[189459]: 2025-12-02 17:12:34.224 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:34 compute-0 nova_compute[189459]: 2025-12-02 17:12:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:36 compute-0 nova_compute[189459]: 2025-12-02 17:12:36.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:36 compute-0 nova_compute[189459]: 2025-12-02 17:12:36.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:12:37 compute-0 podman[250410]: 2025-12-02 17:12:37.278025982 +0000 UTC m=+0.095190378 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:12:37 compute-0 podman[250409]: 2025-12-02 17:12:37.282237153 +0000 UTC m=+0.095245529 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:12:37 compute-0 podman[250408]: 2025-12-02 17:12:37.331771142 +0000 UTC m=+0.161425668 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:12:37 compute-0 nova_compute[189459]: 2025-12-02 17:12:37.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:12:38 compute-0 nova_compute[189459]: 2025-12-02 17:12:38.251 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:39 compute-0 nova_compute[189459]: 2025-12-02 17:12:39.229 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:42 compute-0 nova_compute[189459]: 2025-12-02 17:12:42.197 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:43 compute-0 nova_compute[189459]: 2025-12-02 17:12:43.254 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:43 compute-0 nova_compute[189459]: 2025-12-02 17:12:43.633 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:43 compute-0 nova_compute[189459]: 2025-12-02 17:12:43.684 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:44 compute-0 nova_compute[189459]: 2025-12-02 17:12:44.233 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:44 compute-0 nova_compute[189459]: 2025-12-02 17:12:44.448 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:45 compute-0 nova_compute[189459]: 2025-12-02 17:12:45.322 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:48 compute-0 nova_compute[189459]: 2025-12-02 17:12:48.258 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:48 compute-0 nova_compute[189459]: 2025-12-02 17:12:48.494 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:49 compute-0 nova_compute[189459]: 2025-12-02 17:12:49.235 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:49 compute-0 podman[250478]: 2025-12-02 17:12:49.282293645 +0000 UTC m=+0.101347680 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  2 17:12:51 compute-0 nova_compute[189459]: 2025-12-02 17:12:51.819 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:51 compute-0 nova_compute[189459]: 2025-12-02 17:12:51.851 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:52 compute-0 nova_compute[189459]: 2025-12-02 17:12:52.003 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:52 compute-0 nova_compute[189459]: 2025-12-02 17:12:52.372 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:53 compute-0 nova_compute[189459]: 2025-12-02 17:12:53.261 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:54 compute-0 nova_compute[189459]: 2025-12-02 17:12:54.239 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:55 compute-0 podman[250501]: 2025-12-02 17:12:55.261969213 +0000 UTC m=+0.085915232 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 17:12:55 compute-0 podman[250502]: 2025-12-02 17:12:55.276167069 +0000 UTC m=+0.094093409 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 17:12:58 compute-0 nova_compute[189459]: 2025-12-02 17:12:58.264 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:59 compute-0 nova_compute[189459]: 2025-12-02 17:12:59.242 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:12:59 compute-0 podman[250541]: 2025-12-02 17:12:59.251574443 +0000 UTC m=+0.075543818 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  2 17:12:59 compute-0 podman[250543]: 2025-12-02 17:12:59.263992091 +0000 UTC m=+0.077793507 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  2 17:12:59 compute-0 podman[250542]: 2025-12-02 17:12:59.288479158 +0000 UTC m=+0.107225895 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Dec  2 17:12:59 compute-0 podman[203941]: time="2025-12-02T17:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:12:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:12:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4313 "" "Go-http-client/1.1"
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: ERROR   17:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: ERROR   17:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: ERROR   17:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: ERROR   17:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: ERROR   17:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:13:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:13:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:01.884 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:01.885 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:01.885 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.196 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.197 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.213 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.320 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.321 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.329 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.329 189463 INFO nova.compute.claims [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Claim successful on node compute-0.ctlplane.example.com
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.465 189463 DEBUG nova.compute.provider_tree [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.479 189463 DEBUG nova.scheduler.client.report [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.512 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.512 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.553 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.554 189463 DEBUG nova.network.neutron [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.582 189463 INFO nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.600 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.724 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.725 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.726 189463 INFO nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Creating image(s)
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.727 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "/var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.728 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "/var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.729 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "/var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.730 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:02 compute-0 nova_compute[189459]: 2025-12-02 17:13:02.731 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:03 compute-0 nova_compute[189459]: 2025-12-02 17:13:03.009 189463 DEBUG nova.policy [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '034f84ff036e4d7ca94cfd14dd7f4967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3e60fbd301d4ffb8e3a4b2b966f6692', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.054 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.054 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:13:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:13:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:03.211 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 17:13:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:03.212 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  2 17:13:03 compute-0 nova_compute[189459]: 2025-12-02 17:13:03.215 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:13:03 compute-0 nova_compute[189459]: 2025-12-02 17:13:03.265 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.244 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.381 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.473 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.part --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.474 189463 DEBUG nova.virt.images [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] b90f8403-6db1-4b01-bb62-c5b878a5c904 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.475 189463 DEBUG nova.privsep.utils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.475 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.part /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.712 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.part /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.converted" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.719 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.756 189463 DEBUG nova.network.neutron [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Successfully created port: 59ab5bcf-4e2c-416f-9177-0f4f749195df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.787 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7.converted --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.788 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.803 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.881 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.883 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.883 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.895 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.956 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:04 compute-0 nova_compute[189459]: 2025-12-02 17:13:04.957 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.054 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk 1073741824" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.060 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.061 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.168 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.169 189463 DEBUG nova.virt.disk.api [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Checking if we can resize image /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.170 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.234 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.235 189463 DEBUG nova.virt.disk.api [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Cannot resize image /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.236 189463 DEBUG nova.objects.instance [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lazy-loading 'migration_context' on Instance uuid 02b43864-1632-4352-92f8-bbf244d2c94b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.266 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.268 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Ensure instance console log exists: /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.269 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.270 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.271 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.385 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.386 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.473 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.627 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.628 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.717 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.738 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.739 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.749 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.749 189463 INFO nova.compute.claims [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:13:05 compute-0 nova_compute[189459]: 2025-12-02 17:13:05.941 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.100 189463 DEBUG nova.compute.provider_tree [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.291 189463 DEBUG nova.scheduler.client.report [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.384 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.386 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.390 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.397 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.398 189463 INFO nova.compute.claims [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.486 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.486 189463 DEBUG nova.network.neutron [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.542 189463 INFO nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.563 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.676 189463 DEBUG nova.compute.provider_tree [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.979 189463 DEBUG nova.scheduler.client.report [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.990 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.993 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.994 189463 INFO nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Creating image(s)#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.996 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.997 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:06 compute-0 nova_compute[189459]: 2025-12-02 17:13:06.998 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.027 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.047 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.049 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.090 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.091 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.092 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.120 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.168 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.169 189463 DEBUG nova.network.neutron [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.203 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.203 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.232 189463 INFO nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.282 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.320 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk 1073741824" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.321 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.322 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.341 189463 DEBUG nova.policy [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c800961435cb4a418a6ee67240a574fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95abfdbd702a49dc89fc01dd45a4e014', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.385 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.387 189463 DEBUG nova.virt.disk.api [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Checking if we can resize image /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.387 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.455 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.456 189463 DEBUG nova.virt.disk.api [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Cannot resize image /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.457 189463 DEBUG nova.objects.instance [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'migration_context' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.482 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.483 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Ensure instance console log exists: /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.483 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.484 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.484 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.531 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.533 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.533 189463 INFO nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Creating image(s)
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.534 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "/var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.534 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "/var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.536 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "/var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.550 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.612 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.614 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.615 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.627 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.694 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.695 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.732 189463 DEBUG nova.policy [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81bb015501444821b1071aa660223a05', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed6ce0cd7d04a178c199ead64cc2506', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.738 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.739 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.740 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.799 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.800 189463 DEBUG nova.virt.disk.api [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Checking if we can resize image /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.801 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.863 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.864 189463 DEBUG nova.virt.disk.api [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Cannot resize image /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.865 189463 DEBUG nova.objects.instance [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lazy-loading 'migration_context' on Instance uuid 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.879 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.880 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Ensure instance console log exists: /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.881 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.881 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:13:07 compute-0 nova_compute[189459]: 2025-12-02 17:13:07.882 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:13:08 compute-0 podman[250654]: 2025-12-02 17:13:08.264326214 +0000 UTC m=+0.086203510 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:13:08 compute-0 nova_compute[189459]: 2025-12-02 17:13:08.269 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:13:08 compute-0 podman[250653]: 2025-12-02 17:13:08.27021009 +0000 UTC m=+0.095288470 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:13:08 compute-0 podman[250652]: 2025-12-02 17:13:08.29820479 +0000 UTC m=+0.128769406 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:13:08 compute-0 nova_compute[189459]: 2025-12-02 17:13:08.842 189463 DEBUG nova.network.neutron [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Successfully updated port: 59ab5bcf-4e2c-416f-9177-0f4f749195df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 17:13:08 compute-0 nova_compute[189459]: 2025-12-02 17:13:08.902 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 17:13:08 compute-0 nova_compute[189459]: 2025-12-02 17:13:08.903 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquired lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:13:08 compute-0 nova_compute[189459]: 2025-12-02 17:13:08.903 189463 DEBUG nova.network.neutron [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 17:13:09 compute-0 nova_compute[189459]: 2025-12-02 17:13:09.247 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:13:09 compute-0 nova_compute[189459]: 2025-12-02 17:13:09.362 189463 DEBUG nova.network.neutron [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 17:13:09 compute-0 nova_compute[189459]: 2025-12-02 17:13:09.662 189463 DEBUG nova.network.neutron [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Successfully created port: 5f7c429b-020f-4314-b208-6820880dcf81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 17:13:10 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:10.214 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:13:10 compute-0 nova_compute[189459]: 2025-12-02 17:13:10.697 189463 DEBUG nova.compute.manager [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-changed-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:13:10 compute-0 nova_compute[189459]: 2025-12-02 17:13:10.698 189463 DEBUG nova.compute.manager [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Refreshing instance network info cache due to event network-changed-59ab5bcf-4e2c-416f-9177-0f4f749195df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 17:13:10 compute-0 nova_compute[189459]: 2025-12-02 17:13:10.698 189463 DEBUG oslo_concurrency.lockutils [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 17:13:10 compute-0 nova_compute[189459]: 2025-12-02 17:13:10.917 189463 DEBUG nova.network.neutron [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Successfully created port: 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.157 189463 DEBUG nova.network.neutron [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Updating instance_info_cache with network_info: [{"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.178 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Releasing lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.179 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Instance network_info: |[{"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.179 189463 DEBUG oslo_concurrency.lockutils [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.180 189463 DEBUG nova.network.neutron [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Refreshing network info cache for port 59ab5bcf-4e2c-416f-9177-0f4f749195df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.183 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Start _get_guest_xml network_info=[{"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.192 189463 WARNING nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.201 189463 DEBUG nova.virt.libvirt.host [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.202 189463 DEBUG nova.virt.libvirt.host [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.211 189463 DEBUG nova.virt.libvirt.host [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.212 189463 DEBUG nova.virt.libvirt.host [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.212 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.213 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.213 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.214 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.214 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.214 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.215 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.215 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.216 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.216 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.216 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.217 189463 DEBUG nova.virt.hardware [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.222 189463 DEBUG nova.virt.libvirt.vif [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-391861091',display_name='tempest-ServersTestJSON-server-391861091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-391861091',id=6,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJkpODYtio69+R9WJkG3kbtny62qtQXK5jGx/Nq50/0k24K19mKnFLIrlXUjE+bw7ZtG0AIg3JToi+QXrcu4bcyH71HrNaVVbbFaHlPCeJU82M33Tc/eG7K2cMzqKVw7cg==',key_name='tempest-keypair-469695820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e60fbd301d4ffb8e3a4b2b966f6692',ramdisk_id='',reservation_id='r-2p9ifg95',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-990543652',owner_user_name='tempest-ServersTestJSON-990543652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='034f84ff036e4d7ca94cfd14dd7f4967',uuid=02b43864-1632-4352-92f8-bbf244d2c94b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.223 189463 DEBUG nova.network.os_vif_util [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Converting VIF {"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.225 189463 DEBUG nova.network.os_vif_util [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.226 189463 DEBUG nova.objects.instance [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02b43864-1632-4352-92f8-bbf244d2c94b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.241 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <uuid>02b43864-1632-4352-92f8-bbf244d2c94b</uuid>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <name>instance-00000006</name>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:name>tempest-ServersTestJSON-server-391861091</nova:name>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:13:11</nova:creationTime>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:user uuid="034f84ff036e4d7ca94cfd14dd7f4967">tempest-ServersTestJSON-990543652-project-member</nova:user>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:project uuid="e3e60fbd301d4ffb8e3a4b2b966f6692">tempest-ServersTestJSON-990543652</nova:project>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        <nova:port uuid="59ab5bcf-4e2c-416f-9177-0f4f749195df">
Dec  2 17:13:11 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <system>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <entry name="serial">02b43864-1632-4352-92f8-bbf244d2c94b</entry>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <entry name="uuid">02b43864-1632-4352-92f8-bbf244d2c94b</entry>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </system>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <os>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </os>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <features>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </features>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.config"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:d5:06:ff"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <target dev="tap59ab5bcf-4e"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/console.log" append="off"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <video>
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </video>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:13:11 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:13:11 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:13:11 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:13:11 compute-0 nova_compute[189459]: </domain>
Dec  2 17:13:11 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.242 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Preparing to wait for external event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.242 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.242 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.243 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.244 189463 DEBUG nova.virt.libvirt.vif [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-391861091',display_name='tempest-ServersTestJSON-server-391861091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-391861091',id=6,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJkpODYtio69+R9WJkG3kbtny62qtQXK5jGx/Nq50/0k24K19mKnFLIrlXUjE+bw7ZtG0AIg3JToi+QXrcu4bcyH71HrNaVVbbFaHlPCeJU82M33Tc/eG7K2cMzqKVw7cg==',key_name='tempest-keypair-469695820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e60fbd301d4ffb8e3a4b2b966f6692',ramdisk_id='',reservation_id='r-2p9ifg95',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-990543652',owner_user_name='tempest-ServersTestJSON-990543652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='034f84ff036e4d7ca94cfd14dd7f4967',uuid=02b43864-1632-4352-92f8-bbf244d2c94b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.245 189463 DEBUG nova.network.os_vif_util [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Converting VIF {"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.246 189463 DEBUG nova.network.os_vif_util [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.247 189463 DEBUG os_vif [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.248 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.249 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.250 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.255 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.256 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59ab5bcf-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.257 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59ab5bcf-4e, col_values=(('external_ids', {'iface-id': '59ab5bcf-4e2c-416f-9177-0f4f749195df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:06:ff', 'vm-uuid': '02b43864-1632-4352-92f8-bbf244d2c94b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.260 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:11 compute-0 NetworkManager[56503]: <info>  [1764695591.2624] manager: (tap59ab5bcf-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.264 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.276 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.277 189463 INFO os_vif [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e')#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.342 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.343 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.344 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] No VIF found with MAC fa:16:3e:d5:06:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.345 189463 INFO nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Using config drive#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.845 189463 INFO nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Creating config drive at /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.config#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.853 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp772n3vfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:11 compute-0 nova_compute[189459]: 2025-12-02 17:13:11.985 189463 DEBUG oslo_concurrency.processutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp772n3vfm" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:12 compute-0 kernel: tap59ab5bcf-4e: entered promiscuous mode
Dec  2 17:13:12 compute-0 NetworkManager[56503]: <info>  [1764695592.0913] manager: (tap59ab5bcf-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.092 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 ovn_controller[97975]: 2025-12-02T17:13:12Z|00066|binding|INFO|Claiming lport 59ab5bcf-4e2c-416f-9177-0f4f749195df for this chassis.
Dec  2 17:13:12 compute-0 ovn_controller[97975]: 2025-12-02T17:13:12Z|00067|binding|INFO|59ab5bcf-4e2c-416f-9177-0f4f749195df: Claiming fa:16:3e:d5:06:ff 10.100.0.13
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.097 189463 DEBUG nova.network.neutron [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Successfully updated port: 5f7c429b-020f-4314-b208-6820880dcf81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.111 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:06:ff 10.100.0.13'], port_security=['fa:16:3e:d5:06:ff 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '02b43864-1632-4352-92f8-bbf244d2c94b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e60fbd301d4ffb8e3a4b2b966f6692', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b257bca9-fecb-4f43-a96b-4230babdd266', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76bf3293-a11a-4d43-abac-9749314c9357, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=59ab5bcf-4e2c-416f-9177-0f4f749195df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.113 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 59ab5bcf-4e2c-416f-9177-0f4f749195df in datapath 316d7fe5-27ca-4684-94eb-0f18d776a0e1 bound to our chassis#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.115 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 316d7fe5-27ca-4684-94eb-0f18d776a0e1#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.122 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.126 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.126 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquired lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.126 189463 DEBUG nova.network.neutron [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.129 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[68bcd8db-0333-4be4-9584-71adc2a66580]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.130 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap316d7fe5-21 in ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.132 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap316d7fe5-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.132 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[382cd579-55ca-4248-b9fa-18055d84d358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.133 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e00388-cba9-4b39-a839-33f6441e18b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 systemd-udevd[250748]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.148 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdb6baa-eacf-46c3-a438-f3e08d7a8f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 systemd-machined[155878]: New machine qemu-6-instance-00000006.
Dec  2 17:13:12 compute-0 NetworkManager[56503]: <info>  [1764695592.1726] device (tap59ab5bcf-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:13:12 compute-0 NetworkManager[56503]: <info>  [1764695592.1736] device (tap59ab5bcf-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.181 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c01994-8944-42a6-8df0-41f34081cb14]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.219 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[08f39631-eb3d-4ffe-bcec-f19c4f86d8fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 NetworkManager[56503]: <info>  [1764695592.2453] manager: (tap316d7fe5-20): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.243 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f24b090b-ce01-4b28-800c-69c0f3270ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.285 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[d4eb28a6-8903-4a19-a9af-4e84b49d6b55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.291 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[e61af9a0-5122-47d2-9acb-f2c23cf251ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 NetworkManager[56503]: <info>  [1764695592.3217] device (tap316d7fe5-20): carrier: link connected
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.327 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[11b9583d-a5bb-4d87-9692-2a8b762356e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.340 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.345 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.351 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f0e5ef-5c75-4d25-aa74-1d2d185a7be2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap316d7fe5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:49:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514085, 'reachable_time': 42688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250779, 'error': None, 'target': 'ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_controller[97975]: 2025-12-02T17:13:12Z|00068|binding|INFO|Setting lport 59ab5bcf-4e2c-416f-9177-0f4f749195df ovn-installed in OVS
Dec  2 17:13:12 compute-0 ovn_controller[97975]: 2025-12-02T17:13:12Z|00069|binding|INFO|Setting lport 59ab5bcf-4e2c-416f-9177-0f4f749195df up in Southbound
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.363 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.371 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d55db405-4de1-4868-b2c7-112147d6d939]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:496f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514085, 'tstamp': 514085}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250781, 'error': None, 'target': 'ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.392 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c50ef1b5-c787-4962-b48f-58bf8364bc53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap316d7fe5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:49:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514085, 'reachable_time': 42688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250782, 'error': None, 'target': 'ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.397 189463 DEBUG nova.network.neutron [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.438 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0febb0d2-fcad-46cc-a144-b1a1cbc37713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.520 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a970d0a6-6180-46c3-af72-e48eae673f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.524 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap316d7fe5-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.525 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.526 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap316d7fe5-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:12 compute-0 kernel: tap316d7fe5-20: entered promiscuous mode
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.529 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 NetworkManager[56503]: <info>  [1764695592.5305] manager: (tap316d7fe5-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.534 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.537 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap316d7fe5-20, col_values=(('external_ids', {'iface-id': '5aa1f54e-c5c6-42d0-bb32-41046ce9e71a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.541 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.542 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 ovn_controller[97975]: 2025-12-02T17:13:12Z|00070|binding|INFO|Releasing lport 5aa1f54e-c5c6-42d0-bb32-41046ce9e71a from this chassis (sb_readonly=0)
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.545 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/316d7fe5-27ca-4684-94eb-0f18d776a0e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/316d7fe5-27ca-4684-94eb-0f18d776a0e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.547 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0f79779b-077b-4ec4-94a2-489f57203f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.550 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-316d7fe5-27ca-4684-94eb-0f18d776a0e1
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/316d7fe5-27ca-4684-94eb-0f18d776a0e1.pid.haproxy
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 316d7fe5-27ca-4684-94eb-0f18d776a0e1
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:13:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:12.551 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'env', 'PROCESS_TAG=haproxy-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/316d7fe5-27ca-4684-94eb-0f18d776a0e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.556 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.584 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695592.5839293, 02b43864-1632-4352-92f8-bbf244d2c94b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.585 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] VM Started (Lifecycle Event)#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.796 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.806 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695592.5841346, 02b43864-1632-4352-92f8-bbf244d2c94b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.806 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.824 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.830 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.847 189463 DEBUG nova.network.neutron [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Successfully updated port: 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.860 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.863 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.864 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquired lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.865 189463 DEBUG nova.network.neutron [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.967 189463 DEBUG nova.compute.manager [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-changed-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.967 189463 DEBUG nova.compute.manager [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Refreshing instance network info cache due to event network-changed-5f7c429b-020f-4314-b208-6820880dcf81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:13:12 compute-0 nova_compute[189459]: 2025-12-02 17:13:12.967 189463 DEBUG oslo_concurrency.lockutils [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:12 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  2 17:13:13 compute-0 podman[250821]: 2025-12-02 17:13:13.022565043 +0000 UTC m=+0.080533620 container create 47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:13:13 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  2 17:13:13 compute-0 podman[250821]: 2025-12-02 17:13:12.986632003 +0000 UTC m=+0.044600600 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:13:13 compute-0 systemd[1]: Started libpod-conmon-47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51.scope.
Dec  2 17:13:13 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb9f3e8de4e2fbdade70583e6ab6f6758fc5a63e9aa30397fba72f43d6a535e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:13:13 compute-0 podman[250821]: 2025-12-02 17:13:13.143015037 +0000 UTC m=+0.200983654 container init 47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 17:13:13 compute-0 podman[250821]: 2025-12-02 17:13:13.150153156 +0000 UTC m=+0.208121743 container start 47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  2 17:13:13 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [NOTICE]   (250859) : New worker (250861) forked
Dec  2 17:13:13 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [NOTICE]   (250859) : Loading success.
Dec  2 17:13:13 compute-0 nova_compute[189459]: 2025-12-02 17:13:13.271 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:13 compute-0 nova_compute[189459]: 2025-12-02 17:13:13.669 189463 DEBUG nova.network.neutron [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:13:13 compute-0 nova_compute[189459]: 2025-12-02 17:13:13.750 189463 DEBUG nova.network.neutron [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Updated VIF entry in instance network info cache for port 59ab5bcf-4e2c-416f-9177-0f4f749195df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:13:13 compute-0 nova_compute[189459]: 2025-12-02 17:13:13.751 189463 DEBUG nova.network.neutron [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Updating instance_info_cache with network_info: [{"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:13 compute-0 nova_compute[189459]: 2025-12-02 17:13:13.774 189463 DEBUG oslo_concurrency.lockutils [req-62a1f289-c87a-4c15-ba7b-318858afa156 req-bbb1838f-1309-4e83-b12c-3ead96a91fd8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.576 189463 DEBUG nova.network.neutron [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.597 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Releasing lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.598 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance network_info: |[{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.598 189463 DEBUG oslo_concurrency.lockutils [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.598 189463 DEBUG nova.network.neutron [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Refreshing network info cache for port 5f7c429b-020f-4314-b208-6820880dcf81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.601 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Start _get_guest_xml network_info=[{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.610 189463 WARNING nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.620 189463 DEBUG nova.virt.libvirt.host [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.621 189463 DEBUG nova.virt.libvirt.host [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.627 189463 DEBUG nova.virt.libvirt.host [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.628 189463 DEBUG nova.virt.libvirt.host [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.628 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.629 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.629 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.629 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.629 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.630 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.630 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.630 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.630 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.631 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.631 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.631 189463 DEBUG nova.virt.hardware [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.634 189463 DEBUG nova.virt.libvirt.vif [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-254489110',display_name='tempest-ServerActionsTestJSON-server-254489110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-254489110',id=7,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyR6bavm+MQZcauyhM005zly03nJhuNCVCQKPs0wvfP+MadqCcadkL/Bt8XjTTL8eXxwcDouWS8ZnjdrrFLuYbkYPXzyqLW1B47ah/PB2GNnHP9UuwTuNdPcLluy6idxQ==',key_name='tempest-keypair-508494976',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95abfdbd702a49dc89fc01dd45a4e014',ramdisk_id='',reservation_id='r-ekeaadjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-897427034',owner_user_name='tempest-ServerActionsTestJSON-897427034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c800961435cb4a418a6ee67240a574fe',uuid=4994ed6b-5e0c-4061-a84c-f46ccf29489f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.635 189463 DEBUG nova.network.os_vif_util [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converting VIF {"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.636 189463 DEBUG nova.network.os_vif_util [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.637 189463 DEBUG nova.objects.instance [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.659 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <uuid>4994ed6b-5e0c-4061-a84c-f46ccf29489f</uuid>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <name>instance-00000007</name>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:name>tempest-ServerActionsTestJSON-server-254489110</nova:name>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:13:14</nova:creationTime>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:user uuid="c800961435cb4a418a6ee67240a574fe">tempest-ServerActionsTestJSON-897427034-project-member</nova:user>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:project uuid="95abfdbd702a49dc89fc01dd45a4e014">tempest-ServerActionsTestJSON-897427034</nova:project>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        <nova:port uuid="5f7c429b-020f-4314-b208-6820880dcf81">
Dec  2 17:13:14 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <system>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <entry name="serial">4994ed6b-5e0c-4061-a84c-f46ccf29489f</entry>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <entry name="uuid">4994ed6b-5e0c-4061-a84c-f46ccf29489f</entry>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </system>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <os>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </os>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <features>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </features>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:df:76:b9"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <target dev="tap5f7c429b-02"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/console.log" append="off"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <video>
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </video>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:13:14 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:13:14 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:13:14 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:13:14 compute-0 nova_compute[189459]: </domain>
Dec  2 17:13:14 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.660 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Preparing to wait for external event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.660 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.660 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.661 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.661 189463 DEBUG nova.virt.libvirt.vif [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-254489110',display_name='tempest-ServerActionsTestJSON-server-254489110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-254489110',id=7,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyR6bavm+MQZcauyhM005zly03nJhuNCVCQKPs0wvfP+MadqCcadkL/Bt8XjTTL8eXxwcDouWS8ZnjdrrFLuYbkYPXzyqLW1B47ah/PB2GNnHP9UuwTuNdPcLluy6idxQ==',key_name='tempest-keypair-508494976',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95abfdbd702a49dc89fc01dd45a4e014',ramdisk_id='',reservation_id='r-ekeaadjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-897427034',owner_user_name='tempest-ServerActionsTestJSON-897427034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c800961435cb4a418a6ee67240a574fe',uuid=4994ed6b-5e0c-4061-a84c-f46ccf29489f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.661 189463 DEBUG nova.network.os_vif_util [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converting VIF {"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.662 189463 DEBUG nova.network.os_vif_util [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.662 189463 DEBUG os_vif [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.663 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.663 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.663 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.669 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.670 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f7c429b-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.670 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f7c429b-02, col_values=(('external_ids', {'iface-id': '5f7c429b-020f-4314-b208-6820880dcf81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:76:b9', 'vm-uuid': '4994ed6b-5e0c-4061-a84c-f46ccf29489f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.673 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.674 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:13:14 compute-0 NetworkManager[56503]: <info>  [1764695594.6750] manager: (tap5f7c429b-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.684 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.687 189463 INFO os_vif [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02')#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.750 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.750 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.750 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] No VIF found with MAC fa:16:3e:df:76:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:13:14 compute-0 nova_compute[189459]: 2025-12-02 17:13:14.751 189463 INFO nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Using config drive#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.326 189463 DEBUG nova.network.neutron [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updating instance_info_cache with network_info: [{"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.349 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Releasing lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.350 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Instance network_info: |[{"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.352 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Start _get_guest_xml network_info=[{"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.362 189463 WARNING nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.368 189463 DEBUG nova.virt.libvirt.host [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.369 189463 DEBUG nova.virt.libvirt.host [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.378 189463 DEBUG nova.virt.libvirt.host [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.378 189463 DEBUG nova.virt.libvirt.host [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.379 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.379 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.379 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.380 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.380 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.380 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.380 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.380 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.380 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.381 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.381 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.381 189463 DEBUG nova.virt.hardware [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.384 189463 DEBUG nova.virt.libvirt.vif [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-989242832',display_name='tempest-ServersTestManualDisk-server-989242832',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-989242832',id=8,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYvpDPzRufvp6RMijDP+G7rZjD9/d30s67VD9E/xrEqX/ardgw/RHDIaqyL7Q+HbP7FG/4nBmOMRsGDgodwWMwm3ui910slic5w3Inq1LGCFejBlV1zRvHitI75bBUPZA==',key_name='tempest-keypair-377297279',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed6ce0cd7d04a178c199ead64cc2506',ramdisk_id='',reservation_id='r-wt2kddjl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1707151510',owner_user_name='tempest-ServersTestManualDisk-1707151510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='81bb015501444821b1071aa660223a05',uuid=69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.385 189463 DEBUG nova.network.os_vif_util [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Converting VIF {"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.385 189463 DEBUG nova.network.os_vif_util [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.386 189463 DEBUG nova.objects.instance [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.419 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <uuid>69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7</uuid>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <name>instance-00000008</name>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:name>tempest-ServersTestManualDisk-server-989242832</nova:name>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:13:15</nova:creationTime>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:user uuid="81bb015501444821b1071aa660223a05">tempest-ServersTestManualDisk-1707151510-project-member</nova:user>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:project uuid="6ed6ce0cd7d04a178c199ead64cc2506">tempest-ServersTestManualDisk-1707151510</nova:project>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        <nova:port uuid="6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3">
Dec  2 17:13:15 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <system>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <entry name="serial">69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7</entry>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <entry name="uuid">69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7</entry>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </system>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <os>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </os>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <features>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </features>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.config"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:ac:e1:d1"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <target dev="tap6b0a3d63-0e"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/console.log" append="off"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <video>
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </video>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:13:15 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:13:15 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:13:15 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:13:15 compute-0 nova_compute[189459]: </domain>
Dec  2 17:13:15 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.420 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Preparing to wait for external event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.420 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.420 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.421 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.421 189463 DEBUG nova.virt.libvirt.vif [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-989242832',display_name='tempest-ServersTestManualDisk-server-989242832',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-989242832',id=8,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYvpDPzRufvp6RMijDP+G7rZjD9/d30s67VD9E/xrEqX/ardgw/RHDIaqyL7Q+HbP7FG/4nBmOMRsGDgodwWMwm3ui910slic5w3Inq1LGCFejBlV1zRvHitI75bBUPZA==',key_name='tempest-keypair-377297279',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed6ce0cd7d04a178c199ead64cc2506',ramdisk_id='',reservation_id='r-wt2kddjl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1707151510',owner_user_name='tempest-ServersTestManualDisk-1707151510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='81bb015501444821b1071aa660223a05',uuid=69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.422 189463 DEBUG nova.network.os_vif_util [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Converting VIF {"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.422 189463 DEBUG nova.network.os_vif_util [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.423 189463 DEBUG os_vif [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.423 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.424 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.424 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.428 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.428 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b0a3d63-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.428 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b0a3d63-0e, col_values=(('external_ids', {'iface-id': '6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:e1:d1', 'vm-uuid': '69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.430 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 NetworkManager[56503]: <info>  [1764695595.4329] manager: (tap6b0a3d63-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.432 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.443 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.444 189463 INFO os_vif [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e')#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.497 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.498 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.498 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] No VIF found with MAC fa:16:3e:ac:e1:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.498 189463 INFO nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Using config drive#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.544 189463 DEBUG nova.compute.manager [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-changed-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.545 189463 DEBUG nova.compute.manager [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Refreshing instance network info cache due to event network-changed-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.545 189463 DEBUG oslo_concurrency.lockutils [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.546 189463 DEBUG oslo_concurrency.lockutils [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.546 189463 DEBUG nova.network.neutron [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Refreshing network info cache for port 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.566 189463 INFO nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Creating config drive at /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.575 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa35_61r9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.707 189463 DEBUG oslo_concurrency.processutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa35_61r9" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:15 compute-0 NetworkManager[56503]: <info>  [1764695595.7858] manager: (tap5f7c429b-02): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Dec  2 17:13:15 compute-0 kernel: tap5f7c429b-02: entered promiscuous mode
Dec  2 17:13:15 compute-0 ovn_controller[97975]: 2025-12-02T17:13:15Z|00071|binding|INFO|Claiming lport 5f7c429b-020f-4314-b208-6820880dcf81 for this chassis.
Dec  2 17:13:15 compute-0 ovn_controller[97975]: 2025-12-02T17:13:15Z|00072|binding|INFO|5f7c429b-020f-4314-b208-6820880dcf81: Claiming fa:16:3e:df:76:b9 10.100.0.5
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.795 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.813 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:b9 10.100.0.5'], port_security=['fa:16:3e:df:76:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4994ed6b-5e0c-4061-a84c-f46ccf29489f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95abfdbd702a49dc89fc01dd45a4e014', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c8a6a28c-4df2-4758-a58f-e25b3a4dbf0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ba938b6-3c05-41dd-ab92-658c8cac6fe8, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=5f7c429b-020f-4314-b208-6820880dcf81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.814 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 5f7c429b-020f-4314-b208-6820880dcf81 in datapath 5882ec1f-b595-4c00-871f-f9ec4c7212bd bound to our chassis#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.817 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5882ec1f-b595-4c00-871f-f9ec4c7212bd#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.839 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[170b42c9-690d-4140-8d00-7f7e5fa0ffd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.841 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5882ec1f-b1 in ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.843 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5882ec1f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.844 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[8da758aa-d3d3-4dc9-bb74-f830b409462f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.844 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[62593116-85b5-46b7-b0cb-5993795ac4a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 systemd-machined[155878]: New machine qemu-7-instance-00000007.
Dec  2 17:13:15 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.861 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[6f79b7ad-e317-4a94-9701-3cbce5b78f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.867 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 ovn_controller[97975]: 2025-12-02T17:13:15Z|00073|binding|INFO|Setting lport 5f7c429b-020f-4314-b208-6820880dcf81 ovn-installed in OVS
Dec  2 17:13:15 compute-0 ovn_controller[97975]: 2025-12-02T17:13:15Z|00074|binding|INFO|Setting lport 5f7c429b-020f-4314-b208-6820880dcf81 up in Southbound
Dec  2 17:13:15 compute-0 nova_compute[189459]: 2025-12-02 17:13:15.871 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:15 compute-0 systemd-udevd[250898]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.893 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb76681-70ba-4662-8ce4-0b968a6b497c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 NetworkManager[56503]: <info>  [1764695595.8970] device (tap5f7c429b-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:13:15 compute-0 NetworkManager[56503]: <info>  [1764695595.8978] device (tap5f7c429b-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.920 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[68e52039-fb23-4897-a700-f359e2c6962e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.927 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f9098941-3d52-4e22-abe4-93861274bed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 systemd-udevd[250900]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:13:15 compute-0 NetworkManager[56503]: <info>  [1764695595.9287] manager: (tap5882ec1f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.965 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa37932-93b5-4cc6-a53a-a97c0b14e0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:15 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:15.973 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[d634d436-ab2e-4a6f-aff2-12fd8e9c5854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 NetworkManager[56503]: <info>  [1764695596.0047] device (tap5882ec1f-b0): carrier: link connected
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.012 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[900d3db3-100e-4b6e-9547-b21966773022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.034 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[271f1ed4-20cf-49fd-b936-e0bcf551787d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5882ec1f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:3e:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514453, 'reachable_time': 16644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250931, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.053 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[6b71b7ee-b2a8-4e3a-9f15-3b176beb212e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3ee1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514453, 'tstamp': 514453}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250932, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.073 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d727bb8a-3db4-423c-b0e9-7c1034bac0f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5882ec1f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:3e:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514453, 'reachable_time': 16644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250933, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.115 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[158c636a-86cb-448d-9b54-13dbc209779d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.222 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[45c5bc95-c9b9-4602-99aa-78bd5c4c291f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.227 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5882ec1f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.228 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.228 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5882ec1f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:16 compute-0 NetworkManager[56503]: <info>  [1764695596.2320] manager: (tap5882ec1f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.231 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 kernel: tap5882ec1f-b0: entered promiscuous mode
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.244 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.245 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5882ec1f-b0, col_values=(('external_ids', {'iface-id': '2b400733-be6e-4881-b4c2-791cab786045'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.247 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 ovn_controller[97975]: 2025-12-02T17:13:16Z|00075|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.274 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.276 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5882ec1f-b595-4c00-871f-f9ec4c7212bd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5882ec1f-b595-4c00-871f-f9ec4c7212bd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.277 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e35271-882a-48da-9d5d-6e1fecdd9fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.279 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-5882ec1f-b595-4c00-871f-f9ec4c7212bd
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/5882ec1f-b595-4c00-871f-f9ec4c7212bd.pid.haproxy
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 5882ec1f-b595-4c00-871f-f9ec4c7212bd
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.279 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.282 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'env', 'PROCESS_TAG=haproxy-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5882ec1f-b595-4c00-871f-f9ec4c7212bd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.338 189463 INFO nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Creating config drive at /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.config#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.347 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tr8ya6k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.479 189463 DEBUG oslo_concurrency.processutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7tr8ya6k" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:16 compute-0 kernel: tap6b0a3d63-0e: entered promiscuous mode
Dec  2 17:13:16 compute-0 systemd-udevd[250921]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:13:16 compute-0 NetworkManager[56503]: <info>  [1764695596.5770] manager: (tap6b0a3d63-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Dec  2 17:13:16 compute-0 ovn_controller[97975]: 2025-12-02T17:13:16Z|00076|binding|INFO|Claiming lport 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 for this chassis.
Dec  2 17:13:16 compute-0 ovn_controller[97975]: 2025-12-02T17:13:16Z|00077|binding|INFO|6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3: Claiming fa:16:3e:ac:e1:d1 10.100.0.13
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.577 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.584 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:16.592 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:e1:d1 10.100.0.13'], port_security=['fa:16:3e:ac:e1:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edaab37c-02f3-41cd-b2e4-fec066644901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed6ce0cd7d04a178c199ead64cc2506', 'neutron:revision_number': '2', 'neutron:security_group_ids': '65edcf91-3c96-4166-b10c-35c71191e696', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb180a38-b9a4-443e-9211-3ed8c856b079, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:13:16 compute-0 NetworkManager[56503]: <info>  [1764695596.5933] device (tap6b0a3d63-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:13:16 compute-0 NetworkManager[56503]: <info>  [1764695596.5941] device (tap6b0a3d63-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:13:16 compute-0 systemd-machined[155878]: New machine qemu-8-instance-00000008.
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.647 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  2 17:13:16 compute-0 ovn_controller[97975]: 2025-12-02T17:13:16Z|00078|binding|INFO|Setting lport 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 ovn-installed in OVS
Dec  2 17:13:16 compute-0 ovn_controller[97975]: 2025-12-02T17:13:16Z|00079|binding|INFO|Setting lport 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 up in Southbound
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.670 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.673 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695596.6731849, 4994ed6b-5e0c-4061-a84c-f46ccf29489f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.674 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] VM Started (Lifecycle Event)#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.693 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.709 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695596.6739187, 4994ed6b-5e0c-4061-a84c-f46ccf29489f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.710 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.732 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.744 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:13:16 compute-0 nova_compute[189459]: 2025-12-02 17:13:16.774 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:13:16 compute-0 podman[250999]: 2025-12-02 17:13:16.853281032 +0000 UTC m=+0.100325473 container create 41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:13:16 compute-0 podman[250999]: 2025-12-02 17:13:16.793126012 +0000 UTC m=+0.040170533 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:13:16 compute-0 systemd[1]: Started libpod-conmon-41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604.scope.
Dec  2 17:13:16 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196be997edee2b9b798db254bad0a6dbf221517b2e2f1915f34f1f7ed6d787e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:13:16 compute-0 podman[250999]: 2025-12-02 17:13:16.987889971 +0000 UTC m=+0.234934412 container init 41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:13:17 compute-0 podman[250999]: 2025-12-02 17:13:17.004056118 +0000 UTC m=+0.251100549 container start 41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.012 189463 DEBUG nova.network.neutron [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updated VIF entry in instance network info cache for port 5f7c429b-020f-4314-b208-6820880dcf81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.013 189463 DEBUG nova.network.neutron [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:17 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [NOTICE]   (251019) : New worker (251021) forked
Dec  2 17:13:17 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [NOTICE]   (251019) : Loading success.
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.044 189463 DEBUG oslo_concurrency.lockutils [req-0898d927-3342-43e7-a590-384ca383e4cd req-ef16d26d-e384-44b3-8f4d-acb0434ad248 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.110 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 in datapath edaab37c-02f3-41cd-b2e4-fec066644901 unbound from our chassis#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.114 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network edaab37c-02f3-41cd-b2e4-fec066644901#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.128 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[52117a64-7ccb-4b00-b263-160e45843e83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.130 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapedaab37c-01 in ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.132 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapedaab37c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.133 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b97f5f-d3ec-4c74-a7ff-97067b70044f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.134 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f9256264-01e8-499d-9637-09509d8f0636]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.157 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f3d336-99e8-478c-bbb9-3d459a0cf9a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.176 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3aefe6-4d46-4b3c-81dd-0dc691108488]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.212 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[97d54aac-3922-4306-acbb-5efe88c60c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 NetworkManager[56503]: <info>  [1764695597.2252] manager: (tapedaab37c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.224 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[655b4a95-1ff1-4a69-8d3e-d76a6727cd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.276 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fddce8-7b8e-48c4-96a8-c4f0ace3a1eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.282 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[c8055601-bfba-4827-bf79-bd28cd055bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 NetworkManager[56503]: <info>  [1764695597.3225] device (tapedaab37c-00): carrier: link connected
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.333 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ab367e-488f-4b89-ac36-02aa0f25030c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.356 189463 DEBUG nova.compute.manager [req-a646e39d-ad89-43df-891a-5f475adf5155 req-b3c26393-c4a6-4d9f-b54c-98227ca60bba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.357 189463 DEBUG oslo_concurrency.lockutils [req-a646e39d-ad89-43df-891a-5f475adf5155 req-b3c26393-c4a6-4d9f-b54c-98227ca60bba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.357 189463 DEBUG oslo_concurrency.lockutils [req-a646e39d-ad89-43df-891a-5f475adf5155 req-b3c26393-c4a6-4d9f-b54c-98227ca60bba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.357 189463 DEBUG oslo_concurrency.lockutils [req-a646e39d-ad89-43df-891a-5f475adf5155 req-b3c26393-c4a6-4d9f-b54c-98227ca60bba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.357 189463 DEBUG nova.compute.manager [req-a646e39d-ad89-43df-891a-5f475adf5155 req-b3c26393-c4a6-4d9f-b54c-98227ca60bba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Processing event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.358 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.365 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695597.3651402, 02b43864-1632-4352-92f8-bbf244d2c94b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.366 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.367 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c5ab43-8102-4c66-a15e-d51e50d870f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedaab37c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:f6:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514585, 'reachable_time': 17266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251040, 'error': None, 'target': 'ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.369 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.376 189463 INFO nova.virt.libvirt.driver [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Instance spawned successfully.#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.376 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.392 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[488ca737-f461-46c6-993f-2d2d3a811347]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:f648'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514585, 'tstamp': 514585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251041, 'error': None, 'target': 'ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.417 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e98af78a-8d78-4d0e-ab8d-0a188d60aec7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedaab37c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:f6:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514585, 'reachable_time': 17266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251042, 'error': None, 'target': 'ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.471 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[458bd5c7-a337-4813-b430-24294ae7151f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.479 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.488 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.522 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.523 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.524 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.524 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.525 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.525 189463 DEBUG nova.virt.libvirt.driver [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.562 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[95309288-0fde-4b33-ba8b-aec87d2ec6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.565 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedaab37c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.566 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.568 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedaab37c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:17 compute-0 kernel: tapedaab37c-00: entered promiscuous mode
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.573 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.584 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:17 compute-0 NetworkManager[56503]: <info>  [1764695597.5875] manager: (tapedaab37c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.588 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapedaab37c-00, col_values=(('external_ids', {'iface-id': '99056b26-1c1e-4c28-90a1-102f47c362a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.590 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:17 compute-0 ovn_controller[97975]: 2025-12-02T17:13:17Z|00080|binding|INFO|Releasing lport 99056b26-1c1e-4c28-90a1-102f47c362a9 from this chassis (sb_readonly=0)
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.592 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/edaab37c-02f3-41cd-b2e4-fec066644901.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/edaab37c-02f3-41cd-b2e4-fec066644901.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.592 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.595 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[87d37cf2-f27e-49e4-8906-33ee6cdfd364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.595 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-edaab37c-02f3-41cd-b2e4-fec066644901
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/edaab37c-02f3-41cd-b2e4-fec066644901.pid.haproxy
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID edaab37c-02f3-41cd-b2e4-fec066644901
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:13:17 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:17.596 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901', 'env', 'PROCESS_TAG=haproxy-edaab37c-02f3-41cd-b2e4-fec066644901', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/edaab37c-02f3-41cd-b2e4-fec066644901.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.614 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.665 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.702 189463 INFO nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Took 14.98 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.703 189463 DEBUG nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.774 189463 INFO nova.compute.manager [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Took 15.49 seconds to build instance.#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.791 189463 DEBUG oslo_concurrency.lockutils [None req-d0a83160-423e-4154-9117-ea52880a6f9b 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.986 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695597.9857855, 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:17 compute-0 nova_compute[189459]: 2025-12-02 17:13:17.989 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] VM Started (Lifecycle Event)#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.008 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.015 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695597.9859126, 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.015 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.042 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.049 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.074 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:13:18 compute-0 podman[251078]: 2025-12-02 17:13:18.102947709 +0000 UTC m=+0.117640281 container create dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:13:18 compute-0 podman[251078]: 2025-12-02 17:13:18.055904925 +0000 UTC m=+0.070597487 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:13:18 compute-0 systemd[1]: Started libpod-conmon-dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3.scope.
Dec  2 17:13:18 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d352cb98f1315b86953021cc35ebfc9699d2470287be91bb5d9851722221586/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:13:18 compute-0 podman[251078]: 2025-12-02 17:13:18.233963163 +0000 UTC m=+0.248655715 container init dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:13:18 compute-0 podman[251078]: 2025-12-02 17:13:18.250014107 +0000 UTC m=+0.264706659 container start dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.274 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:18 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [NOTICE]   (251095) : New worker (251097) forked
Dec  2 17:13:18 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [NOTICE]   (251095) : Loading success.
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.535 189463 DEBUG nova.network.neutron [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updated VIF entry in instance network info cache for port 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.536 189463 DEBUG nova.network.neutron [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updating instance_info_cache with network_info: [{"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:18 compute-0 nova_compute[189459]: 2025-12-02 17:13:18.555 189463 DEBUG oslo_concurrency.lockutils [req-c02b1b42-7570-4c89-8ebb-4eca1be6bf99 req-30826408-f45c-4d53-ba52-6e163890db8e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.631 189463 DEBUG nova.compute.manager [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.632 189463 DEBUG oslo_concurrency.lockutils [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.633 189463 DEBUG oslo_concurrency.lockutils [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.633 189463 DEBUG oslo_concurrency.lockutils [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.633 189463 DEBUG nova.compute.manager [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] No waiting events found dispatching network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.634 189463 WARNING nova.compute.manager [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received unexpected event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df for instance with vm_state active and task_state None.#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.634 189463 DEBUG nova.compute.manager [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.635 189463 DEBUG oslo_concurrency.lockutils [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.635 189463 DEBUG oslo_concurrency.lockutils [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.636 189463 DEBUG oslo_concurrency.lockutils [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.636 189463 DEBUG nova.compute.manager [req-14d6c73c-7cb5-4a71-9eda-cbfd7e9eb41e req-f5ba0dff-b6bc-4ac5-a3aa-de6feffebbe0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Processing event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.637 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.642 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695599.6415534, 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.642 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.646 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.652 189463 INFO nova.virt.libvirt.driver [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Instance spawned successfully.#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.653 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.661 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.669 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.698 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.698 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.699 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.700 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.700 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.701 189463 DEBUG nova.virt.libvirt.driver [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.708 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.767 189463 INFO nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Took 12.24 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.768 189463 DEBUG nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.843 189463 INFO nova.compute.manager [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Took 13.92 seconds to build instance.#033[00m
Dec  2 17:13:19 compute-0 nova_compute[189459]: 2025-12-02 17:13:19.876 189463 DEBUG oslo_concurrency.lockutils [None req-1801cfe6-3249-4cf9-ae7f-855ccee50c52 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:20 compute-0 podman[251107]: 2025-12-02 17:13:20.305840424 +0000 UTC m=+0.122908980 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, io.openshift.expose-services=)
Dec  2 17:13:20 compute-0 nova_compute[189459]: 2025-12-02 17:13:20.431 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.265 189463 DEBUG nova.compute.manager [req-7a7ec198-6368-45bf-aa0b-cb9af00c0b81 req-3bc136fe-05c7-4fa6-b8b2-80e7654b3e49 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.265 189463 DEBUG oslo_concurrency.lockutils [req-7a7ec198-6368-45bf-aa0b-cb9af00c0b81 req-3bc136fe-05c7-4fa6-b8b2-80e7654b3e49 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.265 189463 DEBUG oslo_concurrency.lockutils [req-7a7ec198-6368-45bf-aa0b-cb9af00c0b81 req-3bc136fe-05c7-4fa6-b8b2-80e7654b3e49 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.266 189463 DEBUG oslo_concurrency.lockutils [req-7a7ec198-6368-45bf-aa0b-cb9af00c0b81 req-3bc136fe-05c7-4fa6-b8b2-80e7654b3e49 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.266 189463 DEBUG nova.compute.manager [req-7a7ec198-6368-45bf-aa0b-cb9af00c0b81 req-3bc136fe-05c7-4fa6-b8b2-80e7654b3e49 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] No waiting events found dispatching network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.266 189463 WARNING nova.compute.manager [req-7a7ec198-6368-45bf-aa0b-cb9af00c0b81 req-3bc136fe-05c7-4fa6-b8b2-80e7654b3e49 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received unexpected event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.405 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:22 compute-0 NetworkManager[56503]: <info>  [1764695602.4057] manager: (patch-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  2 17:13:22 compute-0 NetworkManager[56503]: <info>  [1764695602.4063] manager: (patch-br-int-to-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.510 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:22 compute-0 ovn_controller[97975]: 2025-12-02T17:13:22Z|00081|binding|INFO|Releasing lport 99056b26-1c1e-4c28-90a1-102f47c362a9 from this chassis (sb_readonly=0)
Dec  2 17:13:22 compute-0 ovn_controller[97975]: 2025-12-02T17:13:22Z|00082|binding|INFO|Releasing lport 5aa1f54e-c5c6-42d0-bb32-41046ce9e71a from this chassis (sb_readonly=0)
Dec  2 17:13:22 compute-0 ovn_controller[97975]: 2025-12-02T17:13:22Z|00083|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.531 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.851 189463 DEBUG nova.compute.manager [req-de359a60-da5f-4401-9fb8-c24a07812453 req-e0dcca83-ba11-4fb5-a6b9-c66f4d15f7e0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.852 189463 DEBUG oslo_concurrency.lockutils [req-de359a60-da5f-4401-9fb8-c24a07812453 req-e0dcca83-ba11-4fb5-a6b9-c66f4d15f7e0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.852 189463 DEBUG oslo_concurrency.lockutils [req-de359a60-da5f-4401-9fb8-c24a07812453 req-e0dcca83-ba11-4fb5-a6b9-c66f4d15f7e0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.853 189463 DEBUG oslo_concurrency.lockutils [req-de359a60-da5f-4401-9fb8-c24a07812453 req-e0dcca83-ba11-4fb5-a6b9-c66f4d15f7e0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.853 189463 DEBUG nova.compute.manager [req-de359a60-da5f-4401-9fb8-c24a07812453 req-e0dcca83-ba11-4fb5-a6b9-c66f4d15f7e0 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Processing event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.854 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.858 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695602.8585672, 4994ed6b-5e0c-4061-a84c-f46ccf29489f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.859 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.867 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.872 189463 INFO nova.virt.libvirt.driver [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance spawned successfully.#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.872 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.891 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.901 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.906 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.907 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.907 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.907 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.908 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.908 189463 DEBUG nova.virt.libvirt.driver [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:13:22 compute-0 nova_compute[189459]: 2025-12-02 17:13:22.941 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:13:23 compute-0 nova_compute[189459]: 2025-12-02 17:13:23.170 189463 INFO nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Took 16.18 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:13:23 compute-0 nova_compute[189459]: 2025-12-02 17:13:23.170 189463 DEBUG nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:23 compute-0 nova_compute[189459]: 2025-12-02 17:13:23.244 189463 INFO nova.compute.manager [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Took 17.53 seconds to build instance.#033[00m
Dec  2 17:13:23 compute-0 nova_compute[189459]: 2025-12-02 17:13:23.273 189463 DEBUG oslo_concurrency.lockutils [None req-826df910-faf1-4076-92f8-38d11e5f9429 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:23 compute-0 nova_compute[189459]: 2025-12-02 17:13:23.276 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.500 189463 DEBUG nova.compute.manager [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-changed-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.500 189463 DEBUG nova.compute.manager [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Refreshing instance network info cache due to event network-changed-59ab5bcf-4e2c-416f-9177-0f4f749195df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.500 189463 DEBUG oslo_concurrency.lockutils [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.501 189463 DEBUG oslo_concurrency.lockutils [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.501 189463 DEBUG nova.network.neutron [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Refreshing network info cache for port 59ab5bcf-4e2c-416f-9177-0f4f749195df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.835 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.836 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.837 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.837 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.837 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.838 189463 INFO nova.compute.manager [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Terminating instance#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.839 189463 DEBUG nova.compute.manager [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:13:24 compute-0 kernel: tap59ab5bcf-4e (unregistering): left promiscuous mode
Dec  2 17:13:24 compute-0 NetworkManager[56503]: <info>  [1764695604.8710] device (tap59ab5bcf-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.890 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:24 compute-0 ovn_controller[97975]: 2025-12-02T17:13:24Z|00084|binding|INFO|Releasing lport 59ab5bcf-4e2c-416f-9177-0f4f749195df from this chassis (sb_readonly=0)
Dec  2 17:13:24 compute-0 ovn_controller[97975]: 2025-12-02T17:13:24Z|00085|binding|INFO|Setting lport 59ab5bcf-4e2c-416f-9177-0f4f749195df down in Southbound
Dec  2 17:13:24 compute-0 ovn_controller[97975]: 2025-12-02T17:13:24Z|00086|binding|INFO|Removing iface tap59ab5bcf-4e ovn-installed in OVS
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.896 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:24 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:24.901 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:06:ff 10.100.0.13'], port_security=['fa:16:3e:d5:06:ff 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '02b43864-1632-4352-92f8-bbf244d2c94b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e60fbd301d4ffb8e3a4b2b966f6692', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b257bca9-fecb-4f43-a96b-4230babdd266', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76bf3293-a11a-4d43-abac-9749314c9357, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=59ab5bcf-4e2c-416f-9177-0f4f749195df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:13:24 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:24.902 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 59ab5bcf-4e2c-416f-9177-0f4f749195df in datapath 316d7fe5-27ca-4684-94eb-0f18d776a0e1 unbound from our chassis#033[00m
Dec  2 17:13:24 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:24.904 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 316d7fe5-27ca-4684-94eb-0f18d776a0e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:13:24 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:24.905 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[2b224c9a-5218-45bf-9553-2471f1e06d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:24 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:24.906 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1 namespace which is not needed anymore#033[00m
Dec  2 17:13:24 compute-0 nova_compute[189459]: 2025-12-02 17:13:24.914 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:24 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  2 17:13:24 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 7.961s CPU time.
Dec  2 17:13:24 compute-0 systemd-machined[155878]: Machine qemu-6-instance-00000006 terminated.
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.076 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.089 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [NOTICE]   (250859) : haproxy version is 2.8.14-c23fe91
Dec  2 17:13:25 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [NOTICE]   (250859) : path to executable is /usr/sbin/haproxy
Dec  2 17:13:25 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [WARNING]  (250859) : Exiting Master process...
Dec  2 17:13:25 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [WARNING]  (250859) : Exiting Master process...
Dec  2 17:13:25 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [ALERT]    (250859) : Current worker (250861) exited with code 143 (Terminated)
Dec  2 17:13:25 compute-0 neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1[250855]: [WARNING]  (250859) : All workers exited. Exiting... (0)
Dec  2 17:13:25 compute-0 systemd[1]: libpod-47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51.scope: Deactivated successfully.
Dec  2 17:13:25 compute-0 podman[251153]: 2025-12-02 17:13:25.118086501 +0000 UTC m=+0.082987585 container died 47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.133 189463 INFO nova.virt.libvirt.driver [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Instance destroyed successfully.#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.134 189463 DEBUG nova.objects.instance [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lazy-loading 'resources' on Instance uuid 02b43864-1632-4352-92f8-bbf244d2c94b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.153 189463 DEBUG nova.virt.libvirt.vif [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-391861091',display_name='tempest-ServersTestJSON-server-391861091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-391861091',id=6,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJkpODYtio69+R9WJkG3kbtny62qtQXK5jGx/Nq50/0k24K19mKnFLIrlXUjE+bw7ZtG0AIg3JToi+QXrcu4bcyH71HrNaVVbbFaHlPCeJU82M33Tc/eG7K2cMzqKVw7cg==',key_name='tempest-keypair-469695820',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:13:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3e60fbd301d4ffb8e3a4b2b966f6692',ramdisk_id='',reservation_id='r-2p9ifg95',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-990543652',owner_user_name='tempest-ServersTestJSON-990543652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:13:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='034f84ff036e4d7ca94cfd14dd7f4967',uuid=02b43864-1632-4352-92f8-bbf244d2c94b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.153 189463 DEBUG nova.network.os_vif_util [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Converting VIF {"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.154 189463 DEBUG nova.network.os_vif_util [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.155 189463 DEBUG os_vif [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.157 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.157 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59ab5bcf-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.159 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.161 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.163 189463 INFO os_vif [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:06:ff,bridge_name='br-int',has_traffic_filtering=True,id=59ab5bcf-4e2c-416f-9177-0f4f749195df,network=Network(316d7fe5-27ca-4684-94eb-0f18d776a0e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59ab5bcf-4e')#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.164 189463 INFO nova.virt.libvirt.driver [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Deleting instance files /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b_del#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.165 189463 INFO nova.virt.libvirt.driver [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Deletion of /var/lib/nova/instances/02b43864-1632-4352-92f8-bbf244d2c94b_del complete#033[00m
Dec  2 17:13:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51-userdata-shm.mount: Deactivated successfully.
Dec  2 17:13:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb9f3e8de4e2fbdade70583e6ab6f6758fc5a63e9aa30397fba72f43d6a535e-merged.mount: Deactivated successfully.
Dec  2 17:13:25 compute-0 podman[251153]: 2025-12-02 17:13:25.20731722 +0000 UTC m=+0.172218304 container cleanup 47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:13:25 compute-0 systemd[1]: libpod-conmon-47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51.scope: Deactivated successfully.
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.241 189463 INFO nova.compute.manager [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.242 189463 DEBUG oslo.service.loopingcall [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.242 189463 DEBUG nova.compute.manager [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.242 189463 DEBUG nova.network.neutron [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:13:25 compute-0 podman[251199]: 2025-12-02 17:13:25.289570244 +0000 UTC m=+0.055439466 container remove 47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.303 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7d8789-ffa0-4473-a34e-8b344fcc496d]: (4, ('Tue Dec  2 05:13:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1 (47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51)\n47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51\nTue Dec  2 05:13:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1 (47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51)\n47a2fd0d57cbf55746f9c93ed6dd2c9d632432dd36fd179afaa4baf50b9def51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.306 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d34dd21b-627d-4669-b7f2-5529642f0856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.307 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap316d7fe5-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.310 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 kernel: tap316d7fe5-20: left promiscuous mode
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.324 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.327 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[401f9ada-27d3-4ae9-892c-0ca632cc34b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.354 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a260f2a4-ed96-4a7d-b0f6-3904f2aadc42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.357 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[de37bdc4-374d-4c5a-976f-8306484d76e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.378 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[96b8483b-7a07-4317-bdc1-84c052636fae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514074, 'reachable_time': 23818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251223, 'error': None, 'target': 'ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d316d7fe5\x2d27ca\x2d4684\x2d94eb\x2d0f18d776a0e1.mount: Deactivated successfully.
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.381 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-316d7fe5-27ca-4684-94eb-0f18d776a0e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:13:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:25.381 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[01ba5f25-da95-4ce3-ac88-bb0e08baea9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:25 compute-0 ovn_controller[97975]: 2025-12-02T17:13:25Z|00087|binding|INFO|Releasing lport 99056b26-1c1e-4c28-90a1-102f47c362a9 from this chassis (sb_readonly=0)
Dec  2 17:13:25 compute-0 ovn_controller[97975]: 2025-12-02T17:13:25Z|00088|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:13:25 compute-0 podman[251213]: 2025-12-02 17:13:25.463650426 +0000 UTC m=+0.108304454 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:13:25 compute-0 podman[251212]: 2025-12-02 17:13:25.465239408 +0000 UTC m=+0.111527849 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.471 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.571 189463 DEBUG nova.compute.manager [req-43275eb8-4b86-413f-b83f-7e107aec57a2 req-ce2b1477-2ae2-45b4-b0b3-e297bac1aeac b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.571 189463 DEBUG oslo_concurrency.lockutils [req-43275eb8-4b86-413f-b83f-7e107aec57a2 req-ce2b1477-2ae2-45b4-b0b3-e297bac1aeac b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.571 189463 DEBUG oslo_concurrency.lockutils [req-43275eb8-4b86-413f-b83f-7e107aec57a2 req-ce2b1477-2ae2-45b4-b0b3-e297bac1aeac b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.571 189463 DEBUG oslo_concurrency.lockutils [req-43275eb8-4b86-413f-b83f-7e107aec57a2 req-ce2b1477-2ae2-45b4-b0b3-e297bac1aeac b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.572 189463 DEBUG nova.compute.manager [req-43275eb8-4b86-413f-b83f-7e107aec57a2 req-ce2b1477-2ae2-45b4-b0b3-e297bac1aeac b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:25 compute-0 nova_compute[189459]: 2025-12-02 17:13:25.572 189463 WARNING nova.compute.manager [req-43275eb8-4b86-413f-b83f-7e107aec57a2 req-ce2b1477-2ae2-45b4-b0b3-e297bac1aeac b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received unexpected event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:13:26 compute-0 ovn_controller[97975]: 2025-12-02T17:13:26Z|00089|binding|INFO|Releasing lport 99056b26-1c1e-4c28-90a1-102f47c362a9 from this chassis (sb_readonly=0)
Dec  2 17:13:26 compute-0 ovn_controller[97975]: 2025-12-02T17:13:26Z|00090|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.608 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.637 189463 DEBUG nova.compute.manager [req-c56c5719-ef10-4f96-84b8-fbff3378eaba req-161f059e-5ce7-4307-9585-155f12bf9763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-vif-unplugged-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.638 189463 DEBUG oslo_concurrency.lockutils [req-c56c5719-ef10-4f96-84b8-fbff3378eaba req-161f059e-5ce7-4307-9585-155f12bf9763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.639 189463 DEBUG oslo_concurrency.lockutils [req-c56c5719-ef10-4f96-84b8-fbff3378eaba req-161f059e-5ce7-4307-9585-155f12bf9763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.640 189463 DEBUG oslo_concurrency.lockutils [req-c56c5719-ef10-4f96-84b8-fbff3378eaba req-161f059e-5ce7-4307-9585-155f12bf9763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.640 189463 DEBUG nova.compute.manager [req-c56c5719-ef10-4f96-84b8-fbff3378eaba req-161f059e-5ce7-4307-9585-155f12bf9763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] No waiting events found dispatching network-vif-unplugged-59ab5bcf-4e2c-416f-9177-0f4f749195df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:26 compute-0 nova_compute[189459]: 2025-12-02 17:13:26.641 189463 DEBUG nova.compute.manager [req-c56c5719-ef10-4f96-84b8-fbff3378eaba req-161f059e-5ce7-4307-9585-155f12bf9763 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-vif-unplugged-59ab5bcf-4e2c-416f-9177-0f4f749195df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.279 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.292 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.293 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.294 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.294 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.295 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.296 189463 INFO nova.compute.manager [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Terminating instance#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.297 189463 DEBUG nova.compute.manager [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:13:28 compute-0 kernel: tap6b0a3d63-0e (unregistering): left promiscuous mode
Dec  2 17:13:28 compute-0 NetworkManager[56503]: <info>  [1764695608.3282] device (tap6b0a3d63-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.334 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:28 compute-0 ovn_controller[97975]: 2025-12-02T17:13:28Z|00091|binding|INFO|Releasing lport 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 from this chassis (sb_readonly=0)
Dec  2 17:13:28 compute-0 ovn_controller[97975]: 2025-12-02T17:13:28Z|00092|binding|INFO|Setting lport 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 down in Southbound
Dec  2 17:13:28 compute-0 ovn_controller[97975]: 2025-12-02T17:13:28Z|00093|binding|INFO|Removing iface tap6b0a3d63-0e ovn-installed in OVS
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.338 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:28.340 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:e1:d1 10.100.0.13'], port_security=['fa:16:3e:ac:e1:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edaab37c-02f3-41cd-b2e4-fec066644901', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed6ce0cd7d04a178c199ead64cc2506', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65edcf91-3c96-4166-b10c-35c71191e696', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb180a38-b9a4-443e-9211-3ed8c856b079, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:13:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:28.342 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 in datapath edaab37c-02f3-41cd-b2e4-fec066644901 unbound from our chassis#033[00m
Dec  2 17:13:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:28.343 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network edaab37c-02f3-41cd-b2e4-fec066644901, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:13:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:28.344 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[37915a05-737e-45f9-98dd-05665e2f7379]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:28.345 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901 namespace which is not needed anymore#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.349 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:28 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  2 17:13:28 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 10.284s CPU time.
Dec  2 17:13:28 compute-0 systemd-machined[155878]: Machine qemu-8-instance-00000008 terminated.
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:28 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [NOTICE]   (251095) : haproxy version is 2.8.14-c23fe91
Dec  2 17:13:28 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [NOTICE]   (251095) : path to executable is /usr/sbin/haproxy
Dec  2 17:13:28 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [WARNING]  (251095) : Exiting Master process...
Dec  2 17:13:28 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [ALERT]    (251095) : Current worker (251097) exited with code 143 (Terminated)
Dec  2 17:13:28 compute-0 neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901[251091]: [WARNING]  (251095) : All workers exited. Exiting... (0)
Dec  2 17:13:28 compute-0 systemd[1]: libpod-dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3.scope: Deactivated successfully.
Dec  2 17:13:28 compute-0 podman[251271]: 2025-12-02 17:13:28.520774394 +0000 UTC m=+0.065576094 container died dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.572 189463 INFO nova.virt.libvirt.driver [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Instance destroyed successfully.#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.574 189463 DEBUG nova.objects.instance [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lazy-loading 'resources' on Instance uuid 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.661 189463 DEBUG nova.virt.libvirt.vif [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-989242832',display_name='tempest-ServersTestManualDisk-server-989242832',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-989242832',id=8,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYvpDPzRufvp6RMijDP+G7rZjD9/d30s67VD9E/xrEqX/ardgw/RHDIaqyL7Q+HbP7FG/4nBmOMRsGDgodwWMwm3ui910slic5w3Inq1LGCFejBlV1zRvHitI75bBUPZA==',key_name='tempest-keypair-377297279',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:13:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed6ce0cd7d04a178c199ead64cc2506',ramdisk_id='',reservation_id='r-wt2kddjl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1707151510',owner_user_name='tempest-ServersTestManualDisk-1707151510-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:13:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='81bb015501444821b1071aa660223a05',uuid=69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.662 189463 DEBUG nova.network.os_vif_util [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Converting VIF {"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.663 189463 DEBUG nova.network.os_vif_util [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.664 189463 DEBUG os_vif [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.666 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.666 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b0a3d63-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.670 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.671 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.674 189463 INFO os_vif [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:e1:d1,bridge_name='br-int',has_traffic_filtering=True,id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3,network=Network(edaab37c-02f3-41cd-b2e4-fec066644901),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0a3d63-0e')#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.675 189463 INFO nova.virt.libvirt.driver [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Deleting instance files /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7_del#033[00m
Dec  2 17:13:28 compute-0 nova_compute[189459]: 2025-12-02 17:13:28.676 189463 INFO nova.virt.libvirt.driver [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Deletion of /var/lib/nova/instances/69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7_del complete#033[00m
Dec  2 17:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3-userdata-shm.mount: Deactivated successfully.
Dec  2 17:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d352cb98f1315b86953021cc35ebfc9699d2470287be91bb5d9851722221586-merged.mount: Deactivated successfully.
Dec  2 17:13:29 compute-0 podman[251271]: 2025-12-02 17:13:29.08421823 +0000 UTC m=+0.629019940 container cleanup dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:13:29 compute-0 systemd[1]: libpod-conmon-dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3.scope: Deactivated successfully.
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.155 189463 INFO nova.compute.manager [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.157 189463 DEBUG oslo.service.loopingcall [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.158 189463 DEBUG nova.compute.manager [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.159 189463 DEBUG nova.network.neutron [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:13:29 compute-0 podman[251314]: 2025-12-02 17:13:29.24576749 +0000 UTC m=+0.120487596 container remove dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.259 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[72eb915a-3e9f-4c46-8571-7df4447845b0]: (4, ('Tue Dec  2 05:13:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901 (dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3)\ndac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3\nTue Dec  2 05:13:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901 (dac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3)\ndac454e03b42fdbe0c935b84e034f6740e8c4ad26a7527e8b8a3de79fa9932c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.262 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a8acee86-605d-4d55-bfdc-3eb06366ce74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.263 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedaab37c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.265 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:29 compute-0 kernel: tapedaab37c-00: left promiscuous mode
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.272 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.277 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[83166ddb-6ec2-4778-bafc-f6db1ea83048]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.286 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.297 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c02ae3cc-454a-41d8-8f17-ec8b5841439c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.299 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ef3321-61a9-4561-a315-9772f4a02f55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.331 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[3f257b7a-9a98-41bd-8e86-92d8e642aaa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514573, 'reachable_time': 29367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251332, 'error': None, 'target': 'ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dedaab37c\x2d02f3\x2d41cd\x2db2e4\x2dfec066644901.mount: Deactivated successfully.
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.336 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-edaab37c-02f3-41cd-b2e4-fec066644901 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:13:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:13:29.336 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcb5a29-aba3-4642-9eba-5ed79402ed35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.351 189463 DEBUG nova.network.neutron [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Updated VIF entry in instance network info cache for port 59ab5bcf-4e2c-416f-9177-0f4f749195df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.352 189463 DEBUG nova.network.neutron [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Updating instance_info_cache with network_info: [{"id": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "address": "fa:16:3e:d5:06:ff", "network": {"id": "316d7fe5-27ca-4684-94eb-0f18d776a0e1", "bridge": "br-int", "label": "tempest-ServersTestJSON-59316021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e60fbd301d4ffb8e3a4b2b966f6692", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59ab5bcf-4e", "ovs_interfaceid": "59ab5bcf-4e2c-416f-9177-0f4f749195df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.378 189463 DEBUG nova.network.neutron [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.382 189463 DEBUG nova.compute.manager [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-changed-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.382 189463 DEBUG nova.compute.manager [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Refreshing instance network info cache due to event network-changed-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.382 189463 DEBUG oslo_concurrency.lockutils [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.383 189463 DEBUG oslo_concurrency.lockutils [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.383 189463 DEBUG nova.network.neutron [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Refreshing network info cache for port 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.385 189463 DEBUG oslo_concurrency.lockutils [req-a9a1ee2c-74e1-49a7-a383-d376a700b91f req-d2e39ef6-8c81-4c59-9501-612c8d27d3f2 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-02b43864-1632-4352-92f8-bbf244d2c94b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.407 189463 INFO nova.compute.manager [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Took 4.16 seconds to deallocate network for instance.#033[00m
Dec  2 17:13:29 compute-0 podman[251331]: 2025-12-02 17:13:29.416127104 +0000 UTC m=+0.098547896 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., version=9.4)
Dec  2 17:13:29 compute-0 podman[251329]: 2025-12-02 17:13:29.450621516 +0000 UTC m=+0.111144539 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:13:29 compute-0 podman[251327]: 2025-12-02 17:13:29.45416329 +0000 UTC m=+0.136656224 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.461 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.462 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.555 189463 DEBUG nova.compute.provider_tree [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.570 189463 DEBUG nova.scheduler.client.report [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.592 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.631 189463 INFO nova.scheduler.client.report [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Deleted allocations for instance 02b43864-1632-4352-92f8-bbf244d2c94b#033[00m
Dec  2 17:13:29 compute-0 nova_compute[189459]: 2025-12-02 17:13:29.705 189463 DEBUG oslo_concurrency.lockutils [None req-e55c2948-5f3f-469f-bf65-b76c2149c6fc 034f84ff036e4d7ca94cfd14dd7f4967 e3e60fbd301d4ffb8e3a4b2b966f6692 - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:29 compute-0 podman[203941]: time="2025-12-02T17:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:13:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:13:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Dec  2 17:13:30 compute-0 nova_compute[189459]: 2025-12-02 17:13:30.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: ERROR   17:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: ERROR   17:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: ERROR   17:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: ERROR   17:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: ERROR   17:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:13:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.872 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-vif-deleted-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.873 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-vif-unplugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.874 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.875 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.875 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.875 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] No waiting events found dispatching network-vif-unplugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.876 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-vif-unplugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.876 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-changed-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.876 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Refreshing instance network info cache due to event network-changed-5f7c429b-020f-4314-b208-6820880dcf81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.877 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.877 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.878 189463 DEBUG nova.network.neutron [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Refreshing network info cache for port 5f7c429b-020f-4314-b208-6820880dcf81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.882 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  2 17:13:31 compute-0 nova_compute[189459]: 2025-12-02 17:13:31.898 189463 DEBUG nova.network.neutron [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.091 189463 INFO nova.compute.manager [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Took 2.93 seconds to deallocate network for instance.#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.098 189463 DEBUG nova.compute.manager [req-bcc2971e-134b-41eb-a45b-4b9461473f7c req-5250a362-cff7-47bc-a613-b7ec353070a3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-vif-deleted-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.098 189463 INFO nova.compute.manager [req-bcc2971e-134b-41eb-a45b-4b9461473f7c req-5250a362-cff7-47bc-a613-b7ec353070a3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Neutron deleted interface 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3; detaching it from the instance and deleting it from the info cache#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.098 189463 DEBUG nova.network.neutron [req-bcc2971e-134b-41eb-a45b-4b9461473f7c req-5250a362-cff7-47bc-a613-b7ec353070a3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.123 189463 DEBUG nova.compute.manager [req-bcc2971e-134b-41eb-a45b-4b9461473f7c req-5250a362-cff7-47bc-a613-b7ec353070a3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Detach interface failed, port_id=6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3, reason: Instance 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.139 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.140 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.201 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.224 189463 DEBUG nova.compute.provider_tree [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.240 189463 DEBUG nova.scheduler.client.report [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.270 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.299 189463 INFO nova.scheduler.client.report [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Deleted allocations for instance 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.375 189463 DEBUG oslo_concurrency.lockutils [None req-1c18731c-e6b8-44ed-8e2d-005ecdc1b814 81bb015501444821b1071aa660223a05 6ed6ce0cd7d04a178c199ead64cc2506 - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.700 189463 DEBUG nova.network.neutron [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updated VIF entry in instance network info cache for port 6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.702 189463 DEBUG nova.network.neutron [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Updating instance_info_cache with network_info: [{"id": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "address": "fa:16:3e:ac:e1:d1", "network": {"id": "edaab37c-02f3-41cd-b2e4-fec066644901", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1970045439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed6ce0cd7d04a178c199ead64cc2506", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0a3d63-0e", "ovs_interfaceid": "6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.888 189463 DEBUG oslo_concurrency.lockutils [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.890 189463 DEBUG nova.compute.manager [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.890 189463 DEBUG oslo_concurrency.lockutils [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.891 189463 DEBUG oslo_concurrency.lockutils [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.891 189463 DEBUG oslo_concurrency.lockutils [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "02b43864-1632-4352-92f8-bbf244d2c94b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.892 189463 DEBUG nova.compute.manager [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] No waiting events found dispatching network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:32 compute-0 nova_compute[189459]: 2025-12-02 17:13:32.892 189463 WARNING nova.compute.manager [req-e3fb3e6b-4f43-4b05-a544-efea8f1be766 req-23cecc1e-a430-4e06-ba9f-65306f3dd391 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Received unexpected event network-vif-plugged-59ab5bcf-4e2c-416f-9177-0f4f749195df for instance with vm_state active and task_state deleting.#033[00m
Dec  2 17:13:33 compute-0 nova_compute[189459]: 2025-12-02 17:13:33.280 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:33 compute-0 nova_compute[189459]: 2025-12-02 17:13:33.670 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.423 189463 DEBUG nova.network.neutron [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updated VIF entry in instance network info cache for port 5f7c429b-020f-4314-b208-6820880dcf81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.424 189463 DEBUG nova.network.neutron [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.467 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.468 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.469 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.469 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.470 189463 DEBUG oslo_concurrency.lockutils [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.471 189463 DEBUG nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] No waiting events found dispatching network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.472 189463 WARNING nova.compute.manager [req-ecccb538-0058-4d76-aa6c-4d4cbc8959c7 req-358f25d7-d4f1-471e-8966-af3cbe09ec4c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Received unexpected event network-vif-plugged-6b0a3d63-0eb3-4984-8ab7-ef02818b5cf3 for instance with vm_state active and task_state deleting.#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.473 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.474 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:13:35 compute-0 nova_compute[189459]: 2025-12-02 17:13:35.475 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.204 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.224 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.225 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.226 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.227 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.227 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.258 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.259 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.260 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.260 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.340 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.407 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.408 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.504 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.865 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.867 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5216MB free_disk=72.16070175170898GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.867 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.868 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.956 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 4994ed6b-5e0c-4061-a84c-f46ccf29489f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.957 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:13:37 compute-0 nova_compute[189459]: 2025-12-02 17:13:37.958 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:13:38 compute-0 nova_compute[189459]: 2025-12-02 17:13:38.011 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:38 compute-0 nova_compute[189459]: 2025-12-02 17:13:38.037 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:38 compute-0 nova_compute[189459]: 2025-12-02 17:13:38.064 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:13:38 compute-0 nova_compute[189459]: 2025-12-02 17:13:38.065 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:38 compute-0 nova_compute[189459]: 2025-12-02 17:13:38.286 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:38 compute-0 nova_compute[189459]: 2025-12-02 17:13:38.674 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:39 compute-0 nova_compute[189459]: 2025-12-02 17:13:39.248 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:39 compute-0 nova_compute[189459]: 2025-12-02 17:13:39.248 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:39 compute-0 nova_compute[189459]: 2025-12-02 17:13:39.249 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:13:39 compute-0 nova_compute[189459]: 2025-12-02 17:13:39.249 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:13:39 compute-0 podman[251395]: 2025-12-02 17:13:39.284243077 +0000 UTC m=+0.089787245 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:13:39 compute-0 podman[251394]: 2025-12-02 17:13:39.307667706 +0000 UTC m=+0.128416186 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:13:39 compute-0 podman[251393]: 2025-12-02 17:13:39.326776771 +0000 UTC m=+0.151055314 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:13:40 compute-0 nova_compute[189459]: 2025-12-02 17:13:40.129 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695605.1284497, 02b43864-1632-4352-92f8-bbf244d2c94b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:40 compute-0 nova_compute[189459]: 2025-12-02 17:13:40.131 189463 INFO nova.compute.manager [-] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:13:40 compute-0 nova_compute[189459]: 2025-12-02 17:13:40.161 189463 DEBUG nova.compute.manager [None req-c346731f-92b3-4c30-a2ef-46956f9ebef6 - - - - - -] [instance: 02b43864-1632-4352-92f8-bbf244d2c94b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:40 compute-0 ovn_controller[97975]: 2025-12-02T17:13:40Z|00094|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:13:40 compute-0 nova_compute[189459]: 2025-12-02 17:13:40.761 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:43 compute-0 nova_compute[189459]: 2025-12-02 17:13:43.289 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:43 compute-0 nova_compute[189459]: 2025-12-02 17:13:43.415 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:43 compute-0 nova_compute[189459]: 2025-12-02 17:13:43.560 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695608.5595055, 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:13:43 compute-0 nova_compute[189459]: 2025-12-02 17:13:43.561 189463 INFO nova.compute.manager [-] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:13:43 compute-0 nova_compute[189459]: 2025-12-02 17:13:43.599 189463 DEBUG nova.compute.manager [None req-3a21f4e2-a28f-47b7-aa9d-129b7f1fa15e - - - - - -] [instance: 69e82a3d-5bb4-4c48-b9a7-819c2bf2e4e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:13:43 compute-0 nova_compute[189459]: 2025-12-02 17:13:43.678 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:47 compute-0 nova_compute[189459]: 2025-12-02 17:13:47.798 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:48 compute-0 nova_compute[189459]: 2025-12-02 17:13:48.293 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:48 compute-0 nova_compute[189459]: 2025-12-02 17:13:48.682 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:50 compute-0 nova_compute[189459]: 2025-12-02 17:13:50.285 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:51 compute-0 podman[251462]: 2025-12-02 17:13:51.295516117 +0000 UTC m=+0.110956084 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec  2 17:13:53 compute-0 nova_compute[189459]: 2025-12-02 17:13:53.298 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:53 compute-0 nova_compute[189459]: 2025-12-02 17:13:53.478 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:53 compute-0 nova_compute[189459]: 2025-12-02 17:13:53.685 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.200 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.201 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.225 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:13:56 compute-0 podman[251484]: 2025-12-02 17:13:56.279861223 +0000 UTC m=+0.109936968 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 17:13:56 compute-0 podman[251485]: 2025-12-02 17:13:56.293301178 +0000 UTC m=+0.108396977 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.305 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.305 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.314 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.315 189463 INFO nova.compute.claims [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.465 189463 DEBUG nova.compute.provider_tree [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.482 189463 DEBUG nova.scheduler.client.report [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.504 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.505 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.547 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.547 189463 DEBUG nova.network.neutron [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.571 189463 INFO nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.587 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.778 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.780 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.781 189463 INFO nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Creating image(s)#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.781 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "/var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.782 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "/var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.782 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "/var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.797 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.859 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.861 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.863 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.887 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.907 189463 DEBUG nova.policy [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3508c10574e947d4ac9984098e029d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f70c98cac9964fff961eb6a5439591fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.954 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:56 compute-0 nova_compute[189459]: 2025-12-02 17:13:56.955 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.073 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk 1073741824" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.074 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.075 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.157 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.159 189463 DEBUG nova.virt.disk.api [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Checking if we can resize image /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.160 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.227 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.228 189463 DEBUG nova.virt.disk.api [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Cannot resize image /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.229 189463 DEBUG nova.objects.instance [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lazy-loading 'migration_context' on Instance uuid 7ef2cae4-13df-469d-8820-5435724f49c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.249 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.249 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Ensure instance console log exists: /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.250 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.251 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:57 compute-0 nova_compute[189459]: 2025-12-02 17:13:57.251 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.300 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.402 189463 DEBUG nova.network.neutron [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Successfully created port: 6642128c-0bde-4b10-95e2-8c6fd2e666fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.601 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.602 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.620 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.686 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.695 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.696 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.704 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.705 189463 INFO nova.compute.claims [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.872 189463 DEBUG nova.compute.provider_tree [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.887 189463 DEBUG nova.scheduler.client.report [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.911 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.912 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.969 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.970 189463 DEBUG nova.network.neutron [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:13:58 compute-0 nova_compute[189459]: 2025-12-02 17:13:58.998 189463 INFO nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.026 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.142 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.144 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.144 189463 INFO nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Creating image(s)#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.145 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "/var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.146 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "/var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.146 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "/var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.159 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.244 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.246 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.247 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.261 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.320 189463 DEBUG nova.policy [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2224fd9bb201434186e1c5dd8456ba6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c6fe42b56e749b28f7ae970351bc360', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.325 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.326 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.365 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.366 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.367 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.422 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.424 189463 DEBUG nova.virt.disk.api [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Checking if we can resize image /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.424 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.480 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.481 189463 DEBUG nova.virt.disk.api [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Cannot resize image /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.482 189463 DEBUG nova.objects.instance [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lazy-loading 'migration_context' on Instance uuid ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.499 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.500 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Ensure instance console log exists: /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.500 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.501 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.501 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:13:59 compute-0 ovn_controller[97975]: 2025-12-02T17:13:59Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:76:b9 10.100.0.5
Dec  2 17:13:59 compute-0 ovn_controller[97975]: 2025-12-02T17:13:59Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:76:b9 10.100.0.5
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.624 189463 DEBUG nova.network.neutron [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Successfully updated port: 6642128c-0bde-4b10-95e2-8c6fd2e666fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.642 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.642 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.643 189463 DEBUG nova.network.neutron [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:13:59 compute-0 podman[203941]: time="2025-12-02T17:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:13:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:13:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.897 189463 DEBUG nova.network.neutron [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.926 189463 DEBUG nova.compute.manager [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.927 189463 DEBUG nova.compute.manager [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing instance network info cache due to event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:13:59 compute-0 nova_compute[189459]: 2025-12-02 17:13:59.927 189463 DEBUG oslo_concurrency.lockutils [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:00 compute-0 podman[251561]: 2025-12-02 17:14:00.259020883 +0000 UTC m=+0.085512982 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec  2 17:14:00 compute-0 podman[251563]: 2025-12-02 17:14:00.272607442 +0000 UTC m=+0.080721685 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 17:14:00 compute-0 podman[251562]: 2025-12-02 17:14:00.300726045 +0000 UTC m=+0.122071388 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  2 17:14:00 compute-0 nova_compute[189459]: 2025-12-02 17:14:00.392 189463 DEBUG nova.network.neutron [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Successfully created port: ea257eb8-c830-440e-9075-c4a66cef84cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.265 189463 DEBUG nova.network.neutron [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.298 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.299 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Instance network_info: |[{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.300 189463 DEBUG oslo_concurrency.lockutils [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.300 189463 DEBUG nova.network.neutron [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.304 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Start _get_guest_xml network_info=[{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.316 189463 WARNING nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.326 189463 DEBUG nova.virt.libvirt.host [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.326 189463 DEBUG nova.virt.libvirt.host [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.336 189463 DEBUG nova.virt.libvirt.host [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.336 189463 DEBUG nova.virt.libvirt.host [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.337 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.337 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.337 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.338 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.338 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.338 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.338 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.339 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.339 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.339 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.339 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.339 189463 DEBUG nova.virt.hardware [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.343 189463 DEBUG nova.virt.libvirt.vif [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1092142075',display_name='tempest-AttachInterfacesUnderV243Test-server-1092142075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1092142075',id=9,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00myeHNTP3xWyB/JPmEMDPJD/Z3tL0Gl6ipVjy90/2iHk+En9ILFQGTf5rDJoEl55ATekTFAiHehQR6buTg8Xf9pptQNp27v9TvP4zRTlRv81Vpao2vmAwLMvFdE1dKw==',key_name='tempest-keypair-310645273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f70c98cac9964fff961eb6a5439591fc',ramdisk_id='',reservation_id='r-djyqhxse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1844515369',owner_user_name='tempest-AttachInterfacesUnderV243Test-1844515369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3508c10574e947d4ac9984098e029d62',uuid=7ef2cae4-13df-469d-8820-5435724f49c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.343 189463 DEBUG nova.network.os_vif_util [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Converting VIF {"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.344 189463 DEBUG nova.network.os_vif_util [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.345 189463 DEBUG nova.objects.instance [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ef2cae4-13df-469d-8820-5435724f49c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.361 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <uuid>7ef2cae4-13df-469d-8820-5435724f49c5</uuid>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <name>instance-00000009</name>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1092142075</nova:name>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:14:01</nova:creationTime>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:user uuid="3508c10574e947d4ac9984098e029d62">tempest-AttachInterfacesUnderV243Test-1844515369-project-member</nova:user>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:project uuid="f70c98cac9964fff961eb6a5439591fc">tempest-AttachInterfacesUnderV243Test-1844515369</nova:project>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        <nova:port uuid="6642128c-0bde-4b10-95e2-8c6fd2e666fc">
Dec  2 17:14:01 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <system>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <entry name="serial">7ef2cae4-13df-469d-8820-5435724f49c5</entry>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <entry name="uuid">7ef2cae4-13df-469d-8820-5435724f49c5</entry>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </system>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <os>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </os>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <features>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </features>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.config"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:d3:e4:18"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <target dev="tap6642128c-0b"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/console.log" append="off"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <video>
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </video>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:14:01 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:14:01 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:14:01 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:14:01 compute-0 nova_compute[189459]: </domain>
Dec  2 17:14:01 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.361 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Preparing to wait for external event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.362 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.362 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.362 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.363 189463 DEBUG nova.virt.libvirt.vif [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1092142075',display_name='tempest-AttachInterfacesUnderV243Test-server-1092142075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1092142075',id=9,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00myeHNTP3xWyB/JPmEMDPJD/Z3tL0Gl6ipVjy90/2iHk+En9ILFQGTf5rDJoEl55ATekTFAiHehQR6buTg8Xf9pptQNp27v9TvP4zRTlRv81Vpao2vmAwLMvFdE1dKw==',key_name='tempest-keypair-310645273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f70c98cac9964fff961eb6a5439591fc',ramdisk_id='',reservation_id='r-djyqhxse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1844515369',owner_user_name='tempest-AttachInterfacesUnderV243Test-1844515369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3508c10574e947d4ac9984098e029d62',uuid=7ef2cae4-13df-469d-8820-5435724f49c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.363 189463 DEBUG nova.network.os_vif_util [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Converting VIF {"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.363 189463 DEBUG nova.network.os_vif_util [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.364 189463 DEBUG os_vif [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.364 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.364 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.365 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.369 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.369 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6642128c-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.369 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6642128c-0b, col_values=(('external_ids', {'iface-id': '6642128c-0bde-4b10-95e2-8c6fd2e666fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:e4:18', 'vm-uuid': '7ef2cae4-13df-469d-8820-5435724f49c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.371 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:01 compute-0 NetworkManager[56503]: <info>  [1764695641.3727] manager: (tap6642128c-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.374 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.377 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.378 189463 INFO os_vif [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b')#033[00m
Dec  2 17:14:01 compute-0 openstack_network_exporter[206093]: ERROR   17:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:14:01 compute-0 openstack_network_exporter[206093]: ERROR   17:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:14:01 compute-0 openstack_network_exporter[206093]: ERROR   17:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:14:01 compute-0 openstack_network_exporter[206093]: ERROR   17:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:14:01 compute-0 openstack_network_exporter[206093]: ERROR   17:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.430 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.430 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.431 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] No VIF found with MAC fa:16:3e:d3:e4:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.431 189463 INFO nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Using config drive#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.827 189463 INFO nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Creating config drive at /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.config#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.838 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0vhhszg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:01.885 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:01.886 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:01.887 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.959 189463 DEBUG nova.network.neutron [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Successfully updated port: ea257eb8-c830-440e-9075-c4a66cef84cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.980 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "refresh_cache-ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.981 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquired lock "refresh_cache-ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.981 189463 DEBUG nova.network.neutron [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:14:01 compute-0 nova_compute[189459]: 2025-12-02 17:14:01.983 189463 DEBUG oslo_concurrency.processutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy0vhhszg" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:02 compute-0 NetworkManager[56503]: <info>  [1764695642.0577] manager: (tap6642128c-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec  2 17:14:02 compute-0 kernel: tap6642128c-0b: entered promiscuous mode
Dec  2 17:14:02 compute-0 ovn_controller[97975]: 2025-12-02T17:14:02Z|00095|binding|INFO|Claiming lport 6642128c-0bde-4b10-95e2-8c6fd2e666fc for this chassis.
Dec  2 17:14:02 compute-0 ovn_controller[97975]: 2025-12-02T17:14:02Z|00096|binding|INFO|6642128c-0bde-4b10-95e2-8c6fd2e666fc: Claiming fa:16:3e:d3:e4:18 10.100.0.8
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.062 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.066 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.077 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:e4:18 10.100.0.8'], port_security=['fa:16:3e:d3:e4:18 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7ef2cae4-13df-469d-8820-5435724f49c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a320061d-433a-4deb-901d-3feb7979c906', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f70c98cac9964fff961eb6a5439591fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '798c1fb9-ee0e-49ab-b9b3-41e9074e219f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b23f6271-ee6c-46aa-a698-b66eef1ab937, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=6642128c-0bde-4b10-95e2-8c6fd2e666fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.079 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 6642128c-0bde-4b10-95e2-8c6fd2e666fc in datapath a320061d-433a-4deb-901d-3feb7979c906 bound to our chassis#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.082 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a320061d-433a-4deb-901d-3feb7979c906#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.086 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.087 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 ovn_controller[97975]: 2025-12-02T17:14:02Z|00097|binding|INFO|Setting lport 6642128c-0bde-4b10-95e2-8c6fd2e666fc ovn-installed in OVS
Dec  2 17:14:02 compute-0 ovn_controller[97975]: 2025-12-02T17:14:02Z|00098|binding|INFO|Setting lport 6642128c-0bde-4b10-95e2-8c6fd2e666fc up in Southbound
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.101 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[30aa511f-ed15-4f8c-b52b-c9d2482cda1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.103 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa320061d-41 in ovnmeta-a320061d-433a-4deb-901d-3feb7979c906 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.104 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa320061d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.105 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[705295b2-7975-4b0f-ae68-8f8da76bc394]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.106 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[9489011f-c9f9-445d-ab7c-60291e0c5fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 systemd-machined[155878]: New machine qemu-9-instance-00000009.
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.119 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[108c1ff3-78ec-4db7-b654-dc12999e5949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 systemd-udevd[251637]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:14:02 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  2 17:14:02 compute-0 NetworkManager[56503]: <info>  [1764695642.1397] device (tap6642128c-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:14:02 compute-0 NetworkManager[56503]: <info>  [1764695642.1448] device (tap6642128c-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.147 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[2a350e11-170b-4b1c-8358-9806508c247a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.180 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0d782c-3a47-4209-9577-e648465162aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.185 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab0cbff-534e-44a4-9e99-ab25de12bd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 NetworkManager[56503]: <info>  [1764695642.1876] manager: (tapa320061d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec  2 17:14:02 compute-0 systemd-udevd[251640]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.187 189463 DEBUG nova.compute.manager [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received event network-changed-ea257eb8-c830-440e-9075-c4a66cef84cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.188 189463 DEBUG nova.compute.manager [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Refreshing instance network info cache due to event network-changed-ea257eb8-c830-440e-9075-c4a66cef84cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.188 189463 DEBUG oslo_concurrency.lockutils [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.213 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[3b942343-0209-4ce7-b913-b49d519aaa99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.220 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[9e419aa0-94ea-4289-9bbb-631c489b03a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 NetworkManager[56503]: <info>  [1764695642.2442] device (tapa320061d-40): carrier: link connected
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.252 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ab5941-7454-4271-9900-13a405af0be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.270 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[47d25bfa-2b54-4733-82ff-5a6cdb23f876]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa320061d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:77:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519077, 'reachable_time': 26584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251668, 'error': None, 'target': 'ovnmeta-a320061d-433a-4deb-901d-3feb7979c906', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.294 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e24f64a6-8aed-4c05-a8b0-fce008f837e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:7776'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519077, 'tstamp': 519077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251670, 'error': None, 'target': 'ovnmeta-a320061d-433a-4deb-901d-3feb7979c906', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.318 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[43fbbaa0-645d-4e95-bfde-0bf8ea0142e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa320061d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:77:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519077, 'reachable_time': 26584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251671, 'error': None, 'target': 'ovnmeta-a320061d-433a-4deb-901d-3feb7979c906', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.356 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[010fbbb2-f310-426f-8817-3b65a7a81db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.424 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ef959055-e722-43e2-bf54-dd068b6e922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.425 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa320061d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.425 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.426 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa320061d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.428 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 kernel: tapa320061d-40: entered promiscuous mode
Dec  2 17:14:02 compute-0 NetworkManager[56503]: <info>  [1764695642.4293] manager: (tapa320061d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.431 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.433 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa320061d-40, col_values=(('external_ids', {'iface-id': 'dec4099c-2b77-4702-ba34-4381a59eb57f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.435 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 ovn_controller[97975]: 2025-12-02T17:14:02Z|00099|binding|INFO|Releasing lport dec4099c-2b77-4702-ba34-4381a59eb57f from this chassis (sb_readonly=0)
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.455 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 nova_compute[189459]: 2025-12-02 17:14:02.456 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.457 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a320061d-433a-4deb-901d-3feb7979c906.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a320061d-433a-4deb-901d-3feb7979c906.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.458 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[370132e1-47d7-4695-b91b-a82d40a337e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.459 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-a320061d-433a-4deb-901d-3feb7979c906
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/a320061d-433a-4deb-901d-3feb7979c906.pid.haproxy
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID a320061d-433a-4deb-901d-3feb7979c906
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:14:02 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:02.459 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a320061d-433a-4deb-901d-3feb7979c906', 'env', 'PROCESS_TAG=haproxy-a320061d-433a-4deb-901d-3feb7979c906', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a320061d-433a-4deb-901d-3feb7979c906.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:14:02 compute-0 podman[251702]: 2025-12-02 17:14:02.933009601 +0000 UTC m=+0.071189893 container create 7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:14:02 compute-0 systemd[1]: Started libpod-conmon-7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5.scope.
Dec  2 17:14:02 compute-0 podman[251702]: 2025-12-02 17:14:02.891918425 +0000 UTC m=+0.030098717 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:14:03 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3e2ed6fb53cf8a32c64684c83f0b067aebe26776629ff068bb9f164f98d597/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.039 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695643.0388727, 7ef2cae4-13df-469d-8820-5435724f49c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.040 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] VM Started (Lifecycle Event)#033[00m
Dec  2 17:14:03 compute-0 podman[251702]: 2025-12-02 17:14:03.04908417 +0000 UTC m=+0.187264482 container init 7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:14:03 compute-0 podman[251702]: 2025-12-02 17:14:03.056186477 +0000 UTC m=+0.194366769 container start 7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.056 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.062 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695643.040091, 7ef2cae4-13df-469d-8820-5435724f49c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.063 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.082 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.087 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:03 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [NOTICE]   (251728) : New worker (251730) forked
Dec  2 17:14:03 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [NOTICE]   (251728) : Loading success.
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.104 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.304 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.619 189463 DEBUG nova.network.neutron [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:14:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:03.705 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:03.706 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:14:03 compute-0 nova_compute[189459]: 2025-12-02 17:14:03.707 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:03 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:03.707 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.821 189463 DEBUG nova.compute.manager [req-05923988-6998-407b-bcdb-dfc685967bee req-69ad8283-3fa2-4982-a195-3769ee896b00 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.821 189463 DEBUG oslo_concurrency.lockutils [req-05923988-6998-407b-bcdb-dfc685967bee req-69ad8283-3fa2-4982-a195-3769ee896b00 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.822 189463 DEBUG oslo_concurrency.lockutils [req-05923988-6998-407b-bcdb-dfc685967bee req-69ad8283-3fa2-4982-a195-3769ee896b00 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.822 189463 DEBUG oslo_concurrency.lockutils [req-05923988-6998-407b-bcdb-dfc685967bee req-69ad8283-3fa2-4982-a195-3769ee896b00 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.822 189463 DEBUG nova.compute.manager [req-05923988-6998-407b-bcdb-dfc685967bee req-69ad8283-3fa2-4982-a195-3769ee896b00 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Processing event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.823 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.828 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695644.827819, 7ef2cae4-13df-469d-8820-5435724f49c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.829 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.834 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.842 189463 INFO nova.virt.libvirt.driver [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Instance spawned successfully.
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.843 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.853 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.869 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.875 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.876 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.877 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.877 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.878 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.878 189463 DEBUG nova.virt.libvirt.driver [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.888 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.926 189463 INFO nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Took 8.15 seconds to spawn the instance on the hypervisor.
Dec  2 17:14:04 compute-0 nova_compute[189459]: 2025-12-02 17:14:04.928 189463 DEBUG nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:14:05 compute-0 nova_compute[189459]: 2025-12-02 17:14:05.021 189463 INFO nova.compute.manager [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Took 8.76 seconds to build instance.
Dec  2 17:14:05 compute-0 nova_compute[189459]: 2025-12-02 17:14:05.039 189463 DEBUG oslo_concurrency.lockutils [None req-b5e540e3-beaa-404f-ba96-6280c85516b3 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.373 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.689 189463 DEBUG nova.network.neutron [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Updating instance_info_cache with network_info: [{"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.708 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Releasing lock "refresh_cache-ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.709 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Instance network_info: |[{"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.711 189463 DEBUG oslo_concurrency.lockutils [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.711 189463 DEBUG nova.network.neutron [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Refreshing network info cache for port ea257eb8-c830-440e-9075-c4a66cef84cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.717 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Start _get_guest_xml network_info=[{"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.728 189463 WARNING nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.752 189463 DEBUG nova.virt.libvirt.host [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.753 189463 DEBUG nova.virt.libvirt.host [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.758 189463 DEBUG nova.virt.libvirt.host [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.759 189463 DEBUG nova.virt.libvirt.host [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.760 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.760 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.761 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.761 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.762 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.762 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.763 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.763 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.764 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.764 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.765 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.765 189463 DEBUG nova.virt.hardware [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.770 189463 DEBUG nova.virt.libvirt.vif [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-566728198',display_name='tempest-ServerAddressesTestJSON-server-566728198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-566728198',id=10,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c6fe42b56e749b28f7ae970351bc360',ramdisk_id='',reservation_id='r-cnnudln5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-916249499',owner_user_name='tempest-ServerAddressesTestJSON-916249499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:59Z,user_data=None,user_id='2224fd9bb201434186e1c5dd8456ba6a',uuid=ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.771 189463 DEBUG nova.network.os_vif_util [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Converting VIF {"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.771 189463 DEBUG nova.network.os_vif_util [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.772 189463 DEBUG nova.objects.instance [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.797 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <uuid>ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33</uuid>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <name>instance-0000000a</name>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:name>tempest-ServerAddressesTestJSON-server-566728198</nova:name>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:14:06</nova:creationTime>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:user uuid="2224fd9bb201434186e1c5dd8456ba6a">tempest-ServerAddressesTestJSON-916249499-project-member</nova:user>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:project uuid="5c6fe42b56e749b28f7ae970351bc360">tempest-ServerAddressesTestJSON-916249499</nova:project>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        <nova:port uuid="ea257eb8-c830-440e-9075-c4a66cef84cf">
Dec  2 17:14:06 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <system>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <entry name="serial">ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33</entry>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <entry name="uuid">ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33</entry>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </system>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <os>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </os>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <features>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </features>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.config"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:d5:49:55"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <target dev="tapea257eb8-c8"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/console.log" append="off"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <video>
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </video>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:14:06 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:14:06 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:14:06 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:14:06 compute-0 nova_compute[189459]: </domain>
Dec  2 17:14:06 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.797 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Preparing to wait for external event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.797 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.797 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.798 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.798 189463 DEBUG nova.virt.libvirt.vif [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:13:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-566728198',display_name='tempest-ServerAddressesTestJSON-server-566728198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-566728198',id=10,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c6fe42b56e749b28f7ae970351bc360',ramdisk_id='',reservation_id='r-cnnudln5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-916249499',owner_user_name='tempest-ServerAddressesTestJSON-916249499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:13:59Z,user_data=None,user_id='2224fd9bb201434186e1c5dd8456ba6a',uuid=ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.798 189463 DEBUG nova.network.os_vif_util [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Converting VIF {"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.799 189463 DEBUG nova.network.os_vif_util [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.799 189463 DEBUG os_vif [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.800 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.800 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.800 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.803 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.803 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea257eb8-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.803 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea257eb8-c8, col_values=(('external_ids', {'iface-id': 'ea257eb8-c830-440e-9075-c4a66cef84cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:49:55', 'vm-uuid': 'ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.805 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:06 compute-0 NetworkManager[56503]: <info>  [1764695646.8075] manager: (tapea257eb8-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.808 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.817 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.818 189463 INFO os_vif [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8')#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.883 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.883 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.884 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] No VIF found with MAC fa:16:3e:d5:49:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.885 189463 INFO nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Using config drive#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.948 189463 DEBUG nova.compute.manager [req-cc73e514-951b-4c04-869f-5b7d27a78b1e req-330a0ac0-d373-4b0f-9b58-3c145633c5ba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.949 189463 DEBUG oslo_concurrency.lockutils [req-cc73e514-951b-4c04-869f-5b7d27a78b1e req-330a0ac0-d373-4b0f-9b58-3c145633c5ba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.963 189463 DEBUG oslo_concurrency.lockutils [req-cc73e514-951b-4c04-869f-5b7d27a78b1e req-330a0ac0-d373-4b0f-9b58-3c145633c5ba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.963 189463 DEBUG oslo_concurrency.lockutils [req-cc73e514-951b-4c04-869f-5b7d27a78b1e req-330a0ac0-d373-4b0f-9b58-3c145633c5ba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.964 189463 DEBUG nova.compute.manager [req-cc73e514-951b-4c04-869f-5b7d27a78b1e req-330a0ac0-d373-4b0f-9b58-3c145633c5ba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] No waiting events found dispatching network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:06 compute-0 nova_compute[189459]: 2025-12-02 17:14:06.964 189463 WARNING nova.compute.manager [req-cc73e514-951b-4c04-869f-5b7d27a78b1e req-330a0ac0-d373-4b0f-9b58-3c145633c5ba b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received unexpected event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc for instance with vm_state active and task_state None.#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.206 189463 DEBUG nova.network.neutron [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updated VIF entry in instance network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.207 189463 DEBUG nova.network.neutron [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.224 189463 DEBUG oslo_concurrency.lockutils [req-eb6abd5f-ed6d-4c80-a330-700b0d6ff750 req-fc5f5bc0-a7cc-4233-87b1-3ab4df71b014 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.477 189463 INFO nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Creating config drive at /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.config#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.482 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy736l5wf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.626 189463 DEBUG oslo_concurrency.processutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy736l5wf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:07 compute-0 kernel: tapea257eb8-c8: entered promiscuous mode
Dec  2 17:14:07 compute-0 ovn_controller[97975]: 2025-12-02T17:14:07Z|00100|binding|INFO|Claiming lport ea257eb8-c830-440e-9075-c4a66cef84cf for this chassis.
Dec  2 17:14:07 compute-0 ovn_controller[97975]: 2025-12-02T17:14:07Z|00101|binding|INFO|ea257eb8-c830-440e-9075-c4a66cef84cf: Claiming fa:16:3e:d5:49:55 10.100.0.8
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.698 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:07 compute-0 NetworkManager[56503]: <info>  [1764695647.7035] manager: (tapea257eb8-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.706 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:49:55 10.100.0.8'], port_security=['fa:16:3e:d5:49:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c6fe42b56e749b28f7ae970351bc360', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd181f30-97c4-4512-a3c0-8bf6e7f1ea71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=503199ac-beda-447f-af7f-9c3c2714fafe, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=ea257eb8-c830-440e-9075-c4a66cef84cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.707 106835 INFO neutron.agent.ovn.metadata.agent [-] Port ea257eb8-c830-440e-9075-c4a66cef84cf in datapath 31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 bound to our chassis#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.709 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31ba0ef0-4f03-4043-9ad0-45f01d5f1a62#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.721 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ff99c417-e6b3-44fe-b57e-85650d57b4b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.722 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31ba0ef0-41 in ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.724 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31ba0ef0-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.724 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b668e9d7-c5f5-420d-b78f-acc9792b8114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.725 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[427cbbed-1557-4acc-a244-f2acb9b612a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_controller[97975]: 2025-12-02T17:14:07Z|00102|binding|INFO|Setting lport ea257eb8-c830-440e-9075-c4a66cef84cf ovn-installed in OVS
Dec  2 17:14:07 compute-0 ovn_controller[97975]: 2025-12-02T17:14:07Z|00103|binding|INFO|Setting lport ea257eb8-c830-440e-9075-c4a66cef84cf up in Southbound
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.729 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:07 compute-0 nova_compute[189459]: 2025-12-02 17:14:07.734 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.741 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[70484cad-ea83-4a9f-b380-9a35b55671c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 systemd-udevd[251761]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:14:07 compute-0 systemd-machined[155878]: New machine qemu-10-instance-0000000a.
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.759 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[96afd4cd-2e60-4d9b-9927-1b6b1f7a060a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec  2 17:14:07 compute-0 NetworkManager[56503]: <info>  [1764695647.7814] device (tapea257eb8-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:14:07 compute-0 NetworkManager[56503]: <info>  [1764695647.7822] device (tapea257eb8-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.797 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6aed65-4bee-44ab-8928-6ba9190187d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 systemd-udevd[251766]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:14:07 compute-0 NetworkManager[56503]: <info>  [1764695647.8037] manager: (tap31ba0ef0-40): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.802 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[dc99dd4a-e87c-40d6-beb6-c5b13f119ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.843 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[27406696-25e9-4179-8310-a1836ea8979c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.850 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[c75e3ad7-c1c4-4dc4-bc4e-70a2e879c482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 NetworkManager[56503]: <info>  [1764695647.8849] device (tap31ba0ef0-40): carrier: link connected
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.897 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[113ea797-5071-4c1f-916d-686261986745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.920 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5888bccf-c295-4a2b-ad98-3b7c989b8eea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31ba0ef0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:1f:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519641, 'reachable_time': 42485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251792, 'error': None, 'target': 'ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.942 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee28583-e7aa-4d18-bb73-62d6fe4b59af]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:1fce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519641, 'tstamp': 519641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251793, 'error': None, 'target': 'ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:07.965 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[71c5a77f-1ec2-4399-970d-990440ac8a18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31ba0ef0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:1f:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519641, 'reachable_time': 42485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251794, 'error': None, 'target': 'ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.012 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ae50f3-9c10-4de3-823d-109b2618083f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.097 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cf9536-2ee0-42d6-b0e4-954bc1f9bcdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.098 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31ba0ef0-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.099 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.099 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31ba0ef0-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:08 compute-0 NetworkManager[56503]: <info>  [1764695648.1024] manager: (tap31ba0ef0-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.101 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:08 compute-0 kernel: tap31ba0ef0-40: entered promiscuous mode
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.107 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31ba0ef0-40, col_values=(('external_ids', {'iface-id': '3234e63f-59e7-43fe-a7af-b1780d6bceb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.109 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:08 compute-0 ovn_controller[97975]: 2025-12-02T17:14:08Z|00104|binding|INFO|Releasing lport 3234e63f-59e7-43fe-a7af-b1780d6bceb7 from this chassis (sb_readonly=0)
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.113 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.114 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31ba0ef0-4f03-4043-9ad0-45f01d5f1a62.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31ba0ef0-4f03-4043-9ad0-45f01d5f1a62.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.115 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d80ba12f-540d-4fde-acb5-17006e867a2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.116 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/31ba0ef0-4f03-4043-9ad0-45f01d5f1a62.pid.haproxy
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 31ba0ef0-4f03-4043-9ad0-45f01d5f1a62
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:14:08 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:08.117 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'env', 'PROCESS_TAG=haproxy-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31ba0ef0-4f03-4043-9ad0-45f01d5f1a62.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.128 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.306 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.357 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695648.357405, ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.359 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] VM Started (Lifecycle Event)#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.397 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.403 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695648.3575099, ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.404 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.423 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.429 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:08 compute-0 nova_compute[189459]: 2025-12-02 17:14:08.453 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:14:08 compute-0 podman[251832]: 2025-12-02 17:14:08.577475568 +0000 UTC m=+0.073624238 container create 8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 17:14:08 compute-0 systemd[1]: Started libpod-conmon-8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8.scope.
Dec  2 17:14:08 compute-0 podman[251832]: 2025-12-02 17:14:08.539115854 +0000 UTC m=+0.035264554 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:14:08 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b546512017bcc827f74273e703f68fae74134c80d172e2d3beedb90493e941b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:14:08 compute-0 podman[251832]: 2025-12-02 17:14:08.704417553 +0000 UTC m=+0.200566263 container init 8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  2 17:14:08 compute-0 podman[251832]: 2025-12-02 17:14:08.712470276 +0000 UTC m=+0.208618956 container start 8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 17:14:08 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [NOTICE]   (251852) : New worker (251854) forked
Dec  2 17:14:08 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [NOTICE]   (251852) : Loading success.
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.483 189463 DEBUG nova.network.neutron [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Updated VIF entry in instance network info cache for port ea257eb8-c830-440e-9075-c4a66cef84cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.486 189463 DEBUG nova.network.neutron [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Updating instance_info_cache with network_info: [{"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.512 189463 DEBUG oslo_concurrency.lockutils [req-35e2f6eb-49f5-4316-9a9d-1f429ff8f861 req-348a4aea-4821-4e03-8796-f0c4c6120cb1 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.550 189463 DEBUG nova.compute.manager [req-aa4a9fba-fd09-42c1-a818-bbf6c46c2015 req-6cbd82f2-8a48-4191-8b6c-446d2b44122b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.552 189463 DEBUG oslo_concurrency.lockutils [req-aa4a9fba-fd09-42c1-a818-bbf6c46c2015 req-6cbd82f2-8a48-4191-8b6c-446d2b44122b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.553 189463 DEBUG oslo_concurrency.lockutils [req-aa4a9fba-fd09-42c1-a818-bbf6c46c2015 req-6cbd82f2-8a48-4191-8b6c-446d2b44122b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.553 189463 DEBUG oslo_concurrency.lockutils [req-aa4a9fba-fd09-42c1-a818-bbf6c46c2015 req-6cbd82f2-8a48-4191-8b6c-446d2b44122b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.554 189463 DEBUG nova.compute.manager [req-aa4a9fba-fd09-42c1-a818-bbf6c46c2015 req-6cbd82f2-8a48-4191-8b6c-446d2b44122b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Processing event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.555 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.562 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695649.5607026, ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.563 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.566 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.573 189463 INFO nova.virt.libvirt.driver [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Instance spawned successfully.#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.574 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.615 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.625 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.631 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.633 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.633 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.634 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.635 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.635 189463 DEBUG nova.virt.libvirt.driver [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.661 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.711 189463 INFO nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Took 10.57 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.712 189463 DEBUG nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.784 189463 INFO nova.compute.manager [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Took 11.11 seconds to build instance.#033[00m
Dec  2 17:14:09 compute-0 nova_compute[189459]: 2025-12-02 17:14:09.813 189463 DEBUG oslo_concurrency.lockutils [None req-c78601f7-4513-431a-9b8f-e7a9783ed48d 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:10 compute-0 podman[251864]: 2025-12-02 17:14:10.292474695 +0000 UTC m=+0.113450260 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:14:10 compute-0 podman[251863]: 2025-12-02 17:14:10.315543085 +0000 UTC m=+0.138614736 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  2 17:14:10 compute-0 podman[251865]: 2025-12-02 17:14:10.32558762 +0000 UTC m=+0.132715879 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.805 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.869 189463 DEBUG nova.compute.manager [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.870 189463 DEBUG oslo_concurrency.lockutils [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.872 189463 DEBUG oslo_concurrency.lockutils [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.872 189463 DEBUG oslo_concurrency.lockutils [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.874 189463 DEBUG nova.compute.manager [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] No waiting events found dispatching network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.875 189463 WARNING nova.compute.manager [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received unexpected event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf for instance with vm_state active and task_state None.#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.876 189463 DEBUG nova.compute.manager [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.876 189463 DEBUG nova.compute.manager [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing instance network info cache due to event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.878 189463 DEBUG oslo_concurrency.lockutils [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.878 189463 DEBUG oslo_concurrency.lockutils [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:11 compute-0 nova_compute[189459]: 2025-12-02 17:14:11.879 189463 DEBUG nova.network.neutron [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.372 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.372 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.373 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.373 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.373 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.375 189463 INFO nova.compute.manager [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Terminating instance#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.376 189463 DEBUG nova.compute.manager [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:14:12 compute-0 kernel: tapea257eb8-c8 (unregistering): left promiscuous mode
Dec  2 17:14:12 compute-0 NetworkManager[56503]: <info>  [1764695652.4141] device (tapea257eb8-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:14:12 compute-0 ovn_controller[97975]: 2025-12-02T17:14:12Z|00105|binding|INFO|Releasing lport ea257eb8-c830-440e-9075-c4a66cef84cf from this chassis (sb_readonly=0)
Dec  2 17:14:12 compute-0 ovn_controller[97975]: 2025-12-02T17:14:12Z|00106|binding|INFO|Setting lport ea257eb8-c830-440e-9075-c4a66cef84cf down in Southbound
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.442 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 ovn_controller[97975]: 2025-12-02T17:14:12Z|00107|binding|INFO|Removing iface tapea257eb8-c8 ovn-installed in OVS
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.452 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:49:55 10.100.0.8'], port_security=['fa:16:3e:d5:49:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c6fe42b56e749b28f7ae970351bc360', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd181f30-97c4-4512-a3c0-8bf6e7f1ea71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=503199ac-beda-447f-af7f-9c3c2714fafe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=ea257eb8-c830-440e-9075-c4a66cef84cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.454 106835 INFO neutron.agent.ovn.metadata.agent [-] Port ea257eb8-c830-440e-9075-c4a66cef84cf in datapath 31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 unbound from our chassis#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.457 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.457 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  2 17:14:12 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 3.634s CPU time.
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.459 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[235a6b1e-79d2-47d4-8a19-3b92a6740ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.461 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 namespace which is not needed anymore#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.461 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 systemd-machined[155878]: Machine qemu-10-instance-0000000a terminated.
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.684 189463 INFO nova.virt.libvirt.driver [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Instance destroyed successfully.#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.684 189463 DEBUG nova.objects.instance [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lazy-loading 'resources' on Instance uuid ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:12 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [NOTICE]   (251852) : haproxy version is 2.8.14-c23fe91
Dec  2 17:14:12 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [NOTICE]   (251852) : path to executable is /usr/sbin/haproxy
Dec  2 17:14:12 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [WARNING]  (251852) : Exiting Master process...
Dec  2 17:14:12 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [WARNING]  (251852) : Exiting Master process...
Dec  2 17:14:12 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [ALERT]    (251852) : Current worker (251854) exited with code 143 (Terminated)
Dec  2 17:14:12 compute-0 neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62[251848]: [WARNING]  (251852) : All workers exited. Exiting... (0)
Dec  2 17:14:12 compute-0 systemd[1]: libpod-8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8.scope: Deactivated successfully.
Dec  2 17:14:12 compute-0 podman[251960]: 2025-12-02 17:14:12.699514048 +0000 UTC m=+0.081853785 container died 8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.699 189463 DEBUG nova.virt.libvirt.vif [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-566728198',display_name='tempest-ServerAddressesTestJSON-server-566728198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-566728198',id=10,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c6fe42b56e749b28f7ae970351bc360',ramdisk_id='',reservation_id='r-cnnudln5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-916249499',owner_user_name='tempest-ServerAddressesTestJSON-916249499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:14:09Z,user_data=None,user_id='2224fd9bb201434186e1c5dd8456ba6a',uuid=ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.699 189463 DEBUG nova.network.os_vif_util [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Converting VIF {"id": "ea257eb8-c830-440e-9075-c4a66cef84cf", "address": "fa:16:3e:d5:49:55", "network": {"id": "31ba0ef0-4f03-4043-9ad0-45f01d5f1a62", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-113020155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c6fe42b56e749b28f7ae970351bc360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea257eb8-c8", "ovs_interfaceid": "ea257eb8-c830-440e-9075-c4a66cef84cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.700 189463 DEBUG nova.network.os_vif_util [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.701 189463 DEBUG os_vif [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.702 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.703 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea257eb8-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.705 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.708 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.711 189463 INFO os_vif [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:49:55,bridge_name='br-int',has_traffic_filtering=True,id=ea257eb8-c830-440e-9075-c4a66cef84cf,network=Network(31ba0ef0-4f03-4043-9ad0-45f01d5f1a62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea257eb8-c8')#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.711 189463 INFO nova.virt.libvirt.driver [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Deleting instance files /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33_del#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.712 189463 INFO nova.virt.libvirt.driver [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Deletion of /var/lib/nova/instances/ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33_del complete#033[00m
Dec  2 17:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8-userdata-shm.mount: Deactivated successfully.
Dec  2 17:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b546512017bcc827f74273e703f68fae74134c80d172e2d3beedb90493e941b7-merged.mount: Deactivated successfully.
Dec  2 17:14:12 compute-0 podman[251960]: 2025-12-02 17:14:12.75744695 +0000 UTC m=+0.139786677 container cleanup 8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.776 189463 INFO nova.compute.manager [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.777 189463 DEBUG oslo.service.loopingcall [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.778 189463 DEBUG nova.compute.manager [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.779 189463 DEBUG nova.network.neutron [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:14:12 compute-0 systemd[1]: libpod-conmon-8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8.scope: Deactivated successfully.
Dec  2 17:14:12 compute-0 podman[252003]: 2025-12-02 17:14:12.855534453 +0000 UTC m=+0.065218285 container remove 8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.882 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[cea42551-3b93-4376-9974-de04c5c44dab]: (4, ('Tue Dec  2 05:14:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 (8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8)\n8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8\nTue Dec  2 05:14:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 (8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8)\n8fdc3abcdecc8b89694c8010fa9e7e8d80109867a347089c79974b3678b31de8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.885 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f7189e96-fa4c-4fc2-b232-582eaa0f897b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.888 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.887 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31ba0ef0-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.892 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 kernel: tap31ba0ef0-40: left promiscuous mode
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.908 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 nova_compute[189459]: 2025-12-02 17:14:12.911 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.915 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[aae0b751-eaa0-44fc-8405-b3492e194199]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.934 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c4df3bf3-568d-4a88-afbc-516b4b9f0ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.936 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e1943ff0-c1db-4254-837e-9cd2b4e35c76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.956 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[662f1532-fea3-4b5a-b0db-b8320267494c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519632, 'reachable_time': 29527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252015, 'error': None, 'target': 'ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d31ba0ef0\x2d4f03\x2d4043\x2d9ad0\x2d45f01d5f1a62.mount: Deactivated successfully.
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.964 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31ba0ef0-4f03-4043-9ad0-45f01d5f1a62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:14:12 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:12.964 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[34ba823c-b93f-4492-a36e-57b008d8894c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.310 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.694 189463 DEBUG nova.network.neutron [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.713 189463 INFO nova.compute.manager [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Took 0.93 seconds to deallocate network for instance.#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.769 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.770 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.804 189463 DEBUG nova.compute.manager [req-f893042c-f677-47eb-9f2c-ede5af1d1d05 req-a690f3d3-65fb-476a-9527-d2472aee66d3 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received event network-vif-deleted-ea257eb8-c830-440e-9075-c4a66cef84cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.889 189463 DEBUG nova.compute.provider_tree [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.913 189463 DEBUG nova.scheduler.client.report [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:14:13 compute-0 nova_compute[189459]: 2025-12-02 17:14:13.937 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.179 189463 INFO nova.scheduler.client.report [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Deleted allocations for instance ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.250 189463 DEBUG oslo_concurrency.lockutils [None req-3a8135ac-3470-4c48-862e-6a7ab7074380 2224fd9bb201434186e1c5dd8456ba6a 5c6fe42b56e749b28f7ae970351bc360 - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.333 189463 DEBUG nova.compute.manager [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received event network-vif-unplugged-ea257eb8-c830-440e-9075-c4a66cef84cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.334 189463 DEBUG oslo_concurrency.lockutils [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.334 189463 DEBUG oslo_concurrency.lockutils [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.335 189463 DEBUG oslo_concurrency.lockutils [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.335 189463 DEBUG nova.compute.manager [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] No waiting events found dispatching network-vif-unplugged-ea257eb8-c830-440e-9075-c4a66cef84cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.335 189463 WARNING nova.compute.manager [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received unexpected event network-vif-unplugged-ea257eb8-c830-440e-9075-c4a66cef84cf for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.335 189463 DEBUG nova.compute.manager [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.336 189463 DEBUG oslo_concurrency.lockutils [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.336 189463 DEBUG oslo_concurrency.lockutils [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.337 189463 DEBUG oslo_concurrency.lockutils [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.337 189463 DEBUG nova.compute.manager [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] No waiting events found dispatching network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.337 189463 WARNING nova.compute.manager [req-101781dc-c40b-40ba-9bac-85edf8e6c5a5 req-06da40f6-d6e6-4b99-b740-ab96f22b2a5e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Received unexpected event network-vif-plugged-ea257eb8-c830-440e-9075-c4a66cef84cf for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.775 189463 DEBUG nova.network.neutron [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updated VIF entry in instance network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.775 189463 DEBUG nova.network.neutron [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:14 compute-0 nova_compute[189459]: 2025-12-02 17:14:14.849 189463 DEBUG oslo_concurrency.lockutils [req-b930932c-41d7-471d-91d6-807825a45d3e req-deaa8cc1-e47b-48d7-9d1c-50e474a4adec b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:17 compute-0 nova_compute[189459]: 2025-12-02 17:14:17.706 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:18 compute-0 nova_compute[189459]: 2025-12-02 17:14:18.313 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:21 compute-0 ovn_controller[97975]: 2025-12-02T17:14:21Z|00108|binding|INFO|Releasing lport dec4099c-2b77-4702-ba34-4381a59eb57f from this chassis (sb_readonly=0)
Dec  2 17:14:21 compute-0 ovn_controller[97975]: 2025-12-02T17:14:21Z|00109|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:14:21 compute-0 nova_compute[189459]: 2025-12-02 17:14:21.560 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:21 compute-0 nova_compute[189459]: 2025-12-02 17:14:21.815 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:22 compute-0 podman[252019]: 2025-12-02 17:14:22.281803555 +0000 UTC m=+0.105998273 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec  2 17:14:22 compute-0 nova_compute[189459]: 2025-12-02 17:14:22.711 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:23 compute-0 nova_compute[189459]: 2025-12-02 17:14:23.316 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:27 compute-0 podman[252041]: 2025-12-02 17:14:27.271194415 +0000 UTC m=+0.090868284 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:14:27 compute-0 podman[252040]: 2025-12-02 17:14:27.284024644 +0000 UTC m=+0.105345686 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS 
Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:14:27 compute-0 nova_compute[189459]: 2025-12-02 17:14:27.680 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695652.6786227, ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:27 compute-0 nova_compute[189459]: 2025-12-02 17:14:27.681 189463 INFO nova.compute.manager [-] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:14:27 compute-0 nova_compute[189459]: 2025-12-02 17:14:27.710 189463 DEBUG nova.compute.manager [None req-b733e5e9-2fb2-4f75-ace1-440e9066464a - - - - - -] [instance: ef1bbbfb-1dcd-4d19-81e0-8ee1e861cf33] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:27 compute-0 nova_compute[189459]: 2025-12-02 17:14:27.714 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:28 compute-0 nova_compute[189459]: 2025-12-02 17:14:28.320 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:28 compute-0 nova_compute[189459]: 2025-12-02 17:14:28.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:28 compute-0 nova_compute[189459]: 2025-12-02 17:14:28.437 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.529 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:29 compute-0 podman[203941]: time="2025-12-02T17:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:14:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.780 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.781 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5247 "" "Go-http-client/1.1"
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.800 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.887 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.888 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.897 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:14:29 compute-0 nova_compute[189459]: 2025-12-02 17:14:29.898 189463 INFO nova.compute.claims [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.078 189463 DEBUG nova.compute.provider_tree [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.094 189463 DEBUG nova.scheduler.client.report [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.133 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.134 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.199 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.199 189463 DEBUG nova.network.neutron [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.215 189463 INFO nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.232 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.494 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.496 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.497 189463 INFO nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Creating image(s)#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.497 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "/var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.498 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "/var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.499 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "/var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.512 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.607 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.610 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.611 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.629 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.697 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.700 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.749 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.751 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.752 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.825 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.826 189463 DEBUG nova.virt.disk.api [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Checking if we can resize image /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.827 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.904 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.906 189463 DEBUG nova.virt.disk.api [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Cannot resize image /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.906 189463 DEBUG nova.objects.instance [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lazy-loading 'migration_context' on Instance uuid c42974d1-ca42-4b24-bf99-14f43ee59916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.932 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.933 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Ensure instance console log exists: /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.934 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.935 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:30 compute-0 nova_compute[189459]: 2025-12-02 17:14:30.935 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.182 189463 DEBUG nova.policy [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed4b2c7904414b1cb5c9314cf52d7eff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:14:31 compute-0 podman[252094]: 2025-12-02 17:14:31.280838533 +0000 UTC m=+0.091575242 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  2 17:14:31 compute-0 podman[252096]: 2025-12-02 17:14:31.290013915 +0000 UTC m=+0.089245780 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 17:14:31 compute-0 podman[252095]: 2025-12-02 17:14:31.321660902 +0000 UTC m=+0.128392565 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: ERROR   17:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: ERROR   17:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: ERROR   17:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: ERROR   17:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: ERROR   17:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:14:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.441 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.635 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.635 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.635 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:14:31 compute-0 nova_compute[189459]: 2025-12-02 17:14:31.635 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:32 compute-0 nova_compute[189459]: 2025-12-02 17:14:32.718 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:33 compute-0 nova_compute[189459]: 2025-12-02 17:14:33.321 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:33 compute-0 nova_compute[189459]: 2025-12-02 17:14:33.772 189463 DEBUG nova.network.neutron [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Successfully created port: 84301772-f4d5-42b6-bb8d-a3217c3c9135 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:14:33 compute-0 nova_compute[189459]: 2025-12-02 17:14:33.818 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:33 compute-0 nova_compute[189459]: 2025-12-02 17:14:33.842 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:33 compute-0 nova_compute[189459]: 2025-12-02 17:14:33.842 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:14:33 compute-0 nova_compute[189459]: 2025-12-02 17:14:33.843 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.434 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.436 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.532 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.599 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.601 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.676 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.685 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.747 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.748 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.816 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:35 compute-0 nova_compute[189459]: 2025-12-02 17:14:35.961 189463 DEBUG nova.network.neutron [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Successfully updated port: 84301772-f4d5-42b6-bb8d-a3217c3c9135 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.023 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.024 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquired lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.025 189463 DEBUG nova.network.neutron [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.263 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.264 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=72.13177871704102GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.265 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.265 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.297 189463 DEBUG nova.compute.manager [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-changed-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.298 189463 DEBUG nova.compute.manager [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Refreshing instance network info cache due to event network-changed-84301772-f4d5-42b6-bb8d-a3217c3c9135. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.298 189463 DEBUG oslo_concurrency.lockutils [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.358 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 4994ed6b-5e0c-4061-a84c-f46ccf29489f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.359 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 7ef2cae4-13df-469d-8820-5435724f49c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.360 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c42974d1-ca42-4b24-bf99-14f43ee59916 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.360 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.361 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.431 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.432 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.433 189463 INFO nova.compute.manager [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Rebooting instance#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.443 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.447 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.448 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquired lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.449 189463 DEBUG nova.network.neutron [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.461 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.465 189463 DEBUG nova.network.neutron [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.487 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.488 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:36 compute-0 nova_compute[189459]: 2025-12-02 17:14:36.803 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.486 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.615 189463 DEBUG nova.network.neutron [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updating instance_info_cache with network_info: [{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.637 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Releasing lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.638 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Instance network_info: |[{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.639 189463 DEBUG oslo_concurrency.lockutils [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.640 189463 DEBUG nova.network.neutron [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Refreshing network info cache for port 84301772-f4d5-42b6-bb8d-a3217c3c9135 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.644 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Start _get_guest_xml network_info=[{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.653 189463 WARNING nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.664 189463 DEBUG nova.virt.libvirt.host [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.666 189463 DEBUG nova.virt.libvirt.host [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.672 189463 DEBUG nova.virt.libvirt.host [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.673 189463 DEBUG nova.virt.libvirt.host [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.675 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.676 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.678 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.679 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.680 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.681 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.682 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.683 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.684 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.685 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.686 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.687 189463 DEBUG nova.virt.hardware [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.693 189463 DEBUG nova.virt.libvirt.vif [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:14:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1141860031',display_name='tempest-TestNetworkBasicOps-server-1141860031',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1141860031',id=11,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI3xMbBwXOKYpHyDLgj1no2pesb80gxFUUAcWy/VgL8qtPGbYtR1FfC4raPyHH8Nhv/kDJBDHp89xaBomyA3RfYH/tyxB0ptma7jHSFm26Tytf6R1iZbF1KGp8fwu9OdFQ==',key_name='tempest-TestNetworkBasicOps-292184376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5fdb2e066254ddbbd43316d1a1a75b2',ramdisk_id='',reservation_id='r-zfgnxf9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-592268676',owner_user_name='tempest-TestNetworkBasicOps-592268676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:14:30Z,user_data=None,user_id='ed4b2c7904414b1cb5c9314cf52d7eff',uuid=c42974d1-ca42-4b24-bf99-14f43ee59916,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.693 189463 DEBUG nova.network.os_vif_util [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converting VIF {"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.695 189463 DEBUG nova.network.os_vif_util [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.696 189463 DEBUG nova.objects.instance [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid c42974d1-ca42-4b24-bf99-14f43ee59916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.713 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <uuid>c42974d1-ca42-4b24-bf99-14f43ee59916</uuid>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <name>instance-0000000b</name>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:name>tempest-TestNetworkBasicOps-server-1141860031</nova:name>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:14:37</nova:creationTime>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:user uuid="ed4b2c7904414b1cb5c9314cf52d7eff">tempest-TestNetworkBasicOps-592268676-project-member</nova:user>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:project uuid="b5fdb2e066254ddbbd43316d1a1a75b2">tempest-TestNetworkBasicOps-592268676</nova:project>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        <nova:port uuid="84301772-f4d5-42b6-bb8d-a3217c3c9135">
Dec  2 17:14:37 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <system>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <entry name="serial">c42974d1-ca42-4b24-bf99-14f43ee59916</entry>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <entry name="uuid">c42974d1-ca42-4b24-bf99-14f43ee59916</entry>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </system>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <os>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </os>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <features>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </features>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.config"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:a9:d2:17"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <target dev="tap84301772-f4"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/console.log" append="off"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <video>
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </video>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:14:37 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:14:37 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:14:37 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:14:37 compute-0 nova_compute[189459]: </domain>
Dec  2 17:14:37 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.726 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Preparing to wait for external event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.726 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.726 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.726 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.727 189463 DEBUG nova.virt.libvirt.vif [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:14:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1141860031',display_name='tempest-TestNetworkBasicOps-server-1141860031',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1141860031',id=11,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI3xMbBwXOKYpHyDLgj1no2pesb80gxFUUAcWy/VgL8qtPGbYtR1FfC4raPyHH8Nhv/kDJBDHp89xaBomyA3RfYH/tyxB0ptma7jHSFm26Tytf6R1iZbF1KGp8fwu9OdFQ==',key_name='tempest-TestNetworkBasicOps-292184376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5fdb2e066254ddbbd43316d1a1a75b2',ramdisk_id='',reservation_id='r-zfgnxf9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-592268676',owner_user_name='tempest-TestNetworkBasicOps-592268676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:14:30Z,user_data=None,user_id='ed4b2c7904414b1cb5c9314cf52d7eff',uuid=c42974d1-ca42-4b24-bf99-14f43ee59916,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.728 189463 DEBUG nova.network.os_vif_util [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converting VIF {"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.728 189463 DEBUG nova.network.os_vif_util [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.729 189463 DEBUG os_vif [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.730 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.730 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.731 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.731 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.734 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.735 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84301772-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.735 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84301772-f4, col_values=(('external_ids', {'iface-id': '84301772-f4d5-42b6-bb8d-a3217c3c9135', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:d2:17', 'vm-uuid': 'c42974d1-ca42-4b24-bf99-14f43ee59916'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.739 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:37 compute-0 NetworkManager[56503]: <info>  [1764695677.7409] manager: (tap84301772-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.741 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.749 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.750 189463 INFO os_vif [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4')#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.814 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.815 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.815 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] No VIF found with MAC fa:16:3e:a9:d2:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:14:37 compute-0 nova_compute[189459]: 2025-12-02 17:14:37.816 189463 INFO nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Using config drive#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.000 189463 DEBUG nova.network.neutron [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.018 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Releasing lock "refresh_cache-4994ed6b-5e0c-4061-a84c-f46ccf29489f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.020 189463 DEBUG nova.compute.manager [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:38 compute-0 kernel: tap5f7c429b-02 (unregistering): left promiscuous mode
Dec  2 17:14:38 compute-0 NetworkManager[56503]: <info>  [1764695678.2130] device (tap5f7c429b-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:14:38 compute-0 ovn_controller[97975]: 2025-12-02T17:14:38Z|00110|binding|INFO|Releasing lport 5f7c429b-020f-4314-b208-6820880dcf81 from this chassis (sb_readonly=0)
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.222 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 ovn_controller[97975]: 2025-12-02T17:14:38Z|00111|binding|INFO|Setting lport 5f7c429b-020f-4314-b208-6820880dcf81 down in Southbound
Dec  2 17:14:38 compute-0 ovn_controller[97975]: 2025-12-02T17:14:38Z|00112|binding|INFO|Removing iface tap5f7c429b-02 ovn-installed in OVS
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.234 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:b9 10.100.0.5'], port_security=['fa:16:3e:df:76:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4994ed6b-5e0c-4061-a84c-f46ccf29489f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95abfdbd702a49dc89fc01dd45a4e014', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c8a6a28c-4df2-4758-a58f-e25b3a4dbf0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ba938b6-3c05-41dd-ab92-658c8cac6fe8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=5f7c429b-020f-4314-b208-6820880dcf81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.235 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 5f7c429b-020f-4314-b208-6820880dcf81 in datapath 5882ec1f-b595-4c00-871f-f9ec4c7212bd unbound from our chassis#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.237 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5882ec1f-b595-4c00-871f-f9ec4c7212bd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.236 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.238 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c7a016-d769-4124-9068-8fb86eb3c8d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.238 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd namespace which is not needed anymore#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.245 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  2 17:14:38 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 42.493s CPU time.
Dec  2 17:14:38 compute-0 systemd-machined[155878]: Machine qemu-7-instance-00000007 terminated.
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.323 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [NOTICE]   (251019) : haproxy version is 2.8.14-c23fe91
Dec  2 17:14:38 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [NOTICE]   (251019) : path to executable is /usr/sbin/haproxy
Dec  2 17:14:38 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [WARNING]  (251019) : Exiting Master process...
Dec  2 17:14:38 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [ALERT]    (251019) : Current worker (251021) exited with code 143 (Terminated)
Dec  2 17:14:38 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[251015]: [WARNING]  (251019) : All workers exited. Exiting... (0)
Dec  2 17:14:38 compute-0 systemd[1]: libpod-41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604.scope: Deactivated successfully.
Dec  2 17:14:38 compute-0 podman[252190]: 2025-12-02 17:14:38.440570465 +0000 UTC m=+0.088779888 container died 41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.442 189463 INFO nova.virt.libvirt.driver [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance destroyed successfully.#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.443 189463 DEBUG nova.objects.instance [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'resources' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.476 189463 DEBUG nova.virt.libvirt.vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-254489110',display_name='tempest-ServerActionsTestJSON-server-254489110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-254489110',id=7,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyR6bavm+MQZcauyhM005zly03nJhuNCVCQKPs0wvfP+MadqCcadkL/Bt8XjTTL8eXxwcDouWS8ZnjdrrFLuYbkYPXzyqLW1B47ah/PB2GNnHP9UuwTuNdPcLluy6idxQ==',key_name='tempest-keypair-508494976',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:13:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95abfdbd702a49dc89fc01dd45a4e014',ramdisk_id='',reservation_id='r-ekeaadjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897427034',owner_user_name='tempest-ServerActionsTestJSON-897427034-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:14:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c800961435cb4a418a6ee67240a574fe',uuid=4994ed6b-5e0c-4061-a84c-f46ccf29489f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.478 189463 DEBUG nova.network.os_vif_util [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converting VIF {"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.479 189463 DEBUG nova.network.os_vif_util [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.480 189463 DEBUG os_vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.482 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.483 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f7c429b-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.485 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.488 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.491 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.494 189463 INFO os_vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02')#033[00m
Dec  2 17:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604-userdata-shm.mount: Deactivated successfully.
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.505 189463 DEBUG nova.virt.libvirt.driver [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Start _get_guest_xml network_info=[{"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-196be997edee2b9b798db254bad0a6dbf221517b2e2f1915f34f1f7ed6d787e2-merged.mount: Deactivated successfully.
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.519 189463 WARNING nova.virt.libvirt.driver [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:14:38 compute-0 podman[252190]: 2025-12-02 17:14:38.526921638 +0000 UTC m=+0.175131001 container cleanup 41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.527 189463 DEBUG nova.virt.libvirt.host [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.529 189463 DEBUG nova.virt.libvirt.host [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.539 189463 DEBUG nova.virt.libvirt.host [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.540 189463 DEBUG nova.virt.libvirt.host [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.540 189463 DEBUG nova.virt.libvirt.driver [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.541 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.541 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.541 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.542 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.542 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.542 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.543 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.543 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.543 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.544 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.544 189463 DEBUG nova.virt.hardware [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.544 189463 DEBUG nova.objects.instance [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:38 compute-0 systemd[1]: libpod-conmon-41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604.scope: Deactivated successfully.
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.570 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:38 compute-0 podman[252241]: 2025-12-02 17:14:38.617640706 +0000 UTC m=+0.058605660 container remove 41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.625 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6fef2a-d204-4851-94e4-d8c4507896ff]: (4, ('Tue Dec  2 05:14:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd (41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604)\n41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604\nTue Dec  2 05:14:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd (41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604)\n41f6294b91b7bc3032b9767894739d947310bdfc701882b470eb86e67a07f604\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.628 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdf7e03-6c2c-4b91-9094-7f3af86d9eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.629 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5882ec1f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.631 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 kernel: tap5882ec1f-b0: left promiscuous mode
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.637 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.638 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.639 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.640 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.641 189463 DEBUG nova.virt.libvirt.vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-254489110',display_name='tempest-ServerActionsTestJSON-server-254489110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-254489110',id=7,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyR6bavm+MQZcauyhM005zly03nJhuNCVCQKPs0wvfP+MadqCcadkL/Bt8XjTTL8eXxwcDouWS8ZnjdrrFLuYbkYPXzyqLW1B47ah/PB2GNnHP9UuwTuNdPcLluy6idxQ==',key_name='tempest-keypair-508494976',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:13:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95abfdbd702a49dc89fc01dd45a4e014',ramdisk_id='',reservation_id='r-ekeaadjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897427034',owner_user_name='tempest-ServerActionsTestJSON-897427034-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:14:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c800961435cb4a418a6ee67240a574fe',uuid=4994ed6b-5e0c-4061-a84c-f46ccf29489f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.642 189463 DEBUG nova.network.os_vif_util [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converting VIF {"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.643 189463 DEBUG nova.network.os_vif_util [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.644 189463 DEBUG nova.objects.instance [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.648 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.651 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.652 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[9338ade8-462b-40ae-8c73-edbdbd0a3851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.665 189463 DEBUG nova.virt.libvirt.driver [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <uuid>4994ed6b-5e0c-4061-a84c-f46ccf29489f</uuid>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <name>instance-00000007</name>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:name>tempest-ServerActionsTestJSON-server-254489110</nova:name>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:14:38</nova:creationTime>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:user uuid="c800961435cb4a418a6ee67240a574fe">tempest-ServerActionsTestJSON-897427034-project-member</nova:user>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:project uuid="95abfdbd702a49dc89fc01dd45a4e014">tempest-ServerActionsTestJSON-897427034</nova:project>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        <nova:port uuid="5f7c429b-020f-4314-b208-6820880dcf81">
Dec  2 17:14:38 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <system>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <entry name="serial">4994ed6b-5e0c-4061-a84c-f46ccf29489f</entry>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <entry name="uuid">4994ed6b-5e0c-4061-a84c-f46ccf29489f</entry>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </system>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <os>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </os>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <features>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </features>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.config"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:df:76:b9"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <target dev="tap5f7c429b-02"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/console.log" append="off"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <video>
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </video>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <input type="keyboard" bus="usb"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:14:38 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:14:38 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:14:38 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:14:38 compute-0 nova_compute[189459]: </domain>
Dec  2 17:14:38 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.667 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.667 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[406640c5-c538-4c2c-8d8e-15b1b1524def]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.669 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[3e687587-a9e3-4437-a7b2-350850c89c30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.692 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[eae0fccb-cfdf-4bf4-b45f-84d8885c79b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514444, 'reachable_time': 42301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252264, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.695 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:14:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:38.695 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[73a00da2-8922-47d0-a1bc-a5a682a91a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d5882ec1f\x2db595\x2d4c00\x2d871f\x2df9ec4c7212bd.mount: Deactivated successfully.
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.733 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.734 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.791 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.794 189463 DEBUG nova.objects.instance [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.827 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.898 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.899 189463 DEBUG nova.virt.disk.api [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Checking if we can resize image /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.899 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.939 189463 INFO nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Creating config drive at /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.config#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.943 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt03qu9xh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.964 189463 DEBUG oslo_concurrency.processutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.965 189463 DEBUG nova.virt.disk.api [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Cannot resize image /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.965 189463 DEBUG nova.objects.instance [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'migration_context' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.983 189463 DEBUG nova.virt.libvirt.vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-254489110',display_name='tempest-ServerActionsTestJSON-server-254489110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-254489110',id=7,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyR6bavm+MQZcauyhM005zly03nJhuNCVCQKPs0wvfP+MadqCcadkL/Bt8XjTTL8eXxwcDouWS8ZnjdrrFLuYbkYPXzyqLW1B47ah/PB2GNnHP9UuwTuNdPcLluy6idxQ==',key_name='tempest-keypair-508494976',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:13:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='95abfdbd702a49dc89fc01dd45a4e014',ramdisk_id='',reservation_id='r-ekeaadjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897427034',owner_user_name='tempest-ServerActionsTestJSON-897427034-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:14:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c800961435cb4a418a6ee67240a574fe',uuid=4994ed6b-5e0c-4061-a84c-f46ccf29489f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.984 189463 DEBUG nova.network.os_vif_util [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converting VIF {"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.985 189463 DEBUG nova.network.os_vif_util [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.985 189463 DEBUG os_vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.986 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.987 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.987 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.992 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.992 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f7c429b-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.993 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f7c429b-02, col_values=(('external_ids', {'iface-id': '5f7c429b-020f-4314-b208-6820880dcf81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:76:b9', 'vm-uuid': '4994ed6b-5e0c-4061-a84c-f46ccf29489f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.995 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:38 compute-0 NetworkManager[56503]: <info>  [1764695678.9966] manager: (tap5f7c429b-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec  2 17:14:38 compute-0 nova_compute[189459]: 2025-12-02 17:14:38.997 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.005 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.007 189463 INFO os_vif [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02')#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.027 189463 DEBUG nova.compute.manager [req-bddb6745-cf76-4b90-9706-6fb35ea8911e req-6a4dc24f-0548-43a8-9455-ad15f7122084 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-unplugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.028 189463 DEBUG oslo_concurrency.lockutils [req-bddb6745-cf76-4b90-9706-6fb35ea8911e req-6a4dc24f-0548-43a8-9455-ad15f7122084 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.028 189463 DEBUG oslo_concurrency.lockutils [req-bddb6745-cf76-4b90-9706-6fb35ea8911e req-6a4dc24f-0548-43a8-9455-ad15f7122084 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.028 189463 DEBUG oslo_concurrency.lockutils [req-bddb6745-cf76-4b90-9706-6fb35ea8911e req-6a4dc24f-0548-43a8-9455-ad15f7122084 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.028 189463 DEBUG nova.compute.manager [req-bddb6745-cf76-4b90-9706-6fb35ea8911e req-6a4dc24f-0548-43a8-9455-ad15f7122084 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-unplugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.028 189463 WARNING nova.compute.manager [req-bddb6745-cf76-4b90-9706-6fb35ea8911e req-6a4dc24f-0548-43a8-9455-ad15f7122084 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received unexpected event network-vif-unplugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.074 189463 DEBUG oslo_concurrency.processutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt03qu9xh" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:14:39 compute-0 kernel: tap5f7c429b-02: entered promiscuous mode
Dec  2 17:14:39 compute-0 systemd-udevd[252169]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00113|binding|INFO|Claiming lport 5f7c429b-020f-4314-b208-6820880dcf81 for this chassis.
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00114|binding|INFO|5f7c429b-020f-4314-b208-6820880dcf81: Claiming fa:16:3e:df:76:b9 10.100.0.5
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.120 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.1233] manager: (tap5f7c429b-02): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00115|binding|INFO|Setting lport 5f7c429b-020f-4314-b208-6820880dcf81 ovn-installed in OVS
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00116|binding|INFO|Setting lport 5f7c429b-020f-4314-b208-6820880dcf81 up in Southbound
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.133 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:b9 10.100.0.5'], port_security=['fa:16:3e:df:76:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4994ed6b-5e0c-4061-a84c-f46ccf29489f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95abfdbd702a49dc89fc01dd45a4e014', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c8a6a28c-4df2-4758-a58f-e25b3a4dbf0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ba938b6-3c05-41dd-ab92-658c8cac6fe8, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=5f7c429b-020f-4314-b208-6820880dcf81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.135 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 5f7c429b-020f-4314-b208-6820880dcf81 in datapath 5882ec1f-b595-4c00-871f-f9ec4c7212bd bound to our chassis#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.138 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5882ec1f-b595-4c00-871f-f9ec4c7212bd#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.1414] device (tap5f7c429b-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.141 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.1425] device (tap5f7c429b-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.152 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.158 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a71388d9-4b20-4ab0-81d2-0114ba695bb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.159 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5882ec1f-b1 in ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.1619] manager: (tap84301772-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.161 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5882ec1f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:14:39 compute-0 kernel: tap84301772-f4: entered promiscuous mode
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.162 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[13a1a361-9a13-4bcf-96c4-6a8196e49c81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.166 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.165 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[da52993e-ec9a-4d6a-840c-aaf962aea203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00117|binding|INFO|Claiming lport 84301772-f4d5-42b6-bb8d-a3217c3c9135 for this chassis.
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00118|binding|INFO|84301772-f4d5-42b6-bb8d-a3217c3c9135: Claiming fa:16:3e:a9:d2:17 10.100.0.13
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.175 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d2:17 10.100.0.13'], port_security=['fa:16:3e:a9:d2:17 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c42974d1-ca42-4b24-bf99-14f43ee59916', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2591d563-0f24-454c-a7d6-5a800a4529e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6755368d-a77d-4335-bd69-0f08f2712850', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa973d1f-4349-4977-a256-bb28e0fe00db, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=84301772-f4d5-42b6-bb8d-a3217c3c9135) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.1811] device (tap84301772-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.1819] device (tap84301772-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.183 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.185 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf2955a-115a-4d3c-ba92-bf0641559040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00119|binding|INFO|Setting lport 84301772-f4d5-42b6-bb8d-a3217c3c9135 ovn-installed in OVS
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00120|binding|INFO|Setting lport 84301772-f4d5-42b6-bb8d-a3217c3c9135 up in Southbound
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.189 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 systemd-machined[155878]: New machine qemu-11-instance-00000007.
Dec  2 17:14:39 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000007.
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.220 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e891b2-3b02-447d-9714-0bd93700d497]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 systemd-machined[155878]: New machine qemu-12-instance-0000000b.
Dec  2 17:14:39 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.259 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[65d942c5-074e-4f72-ba77-4b5045f7ebda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.2725] manager: (tap5882ec1f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.271 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e35e5c41-9f8e-4dba-8a80-94beb701b82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.310 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[0f062a3b-7f2d-4d89-a193-9998aa98f253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.313 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[463b65f0-0618-4a39-ab7c-c94b15f78545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.3400] device (tap5882ec1f-b0): carrier: link connected
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.345 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[50910ed6-fc90-462e-b7b8-5bed2a109406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.367 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f1820a57-8af6-4230-8295-69d11a6e9a94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5882ec1f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:3e:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522787, 'reachable_time': 16313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252356, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.387 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4c4d14-01eb-437b-86d8-36eb9f7289e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3ee1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522787, 'tstamp': 522787}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252358, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.402 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d511f3a7-ae06-4880-991c-ab68ef42e2fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5882ec1f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:3e:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522787, 'reachable_time': 16313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252359, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.442 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[16490cec-4f52-46d8-9e24-26839fe8caf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.548 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[2d29aa19-128d-4678-899b-c1d48654532e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.551 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5882ec1f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.551 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.551 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5882ec1f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:39 compute-0 kernel: tap5882ec1f-b0: entered promiscuous mode
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.554 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 NetworkManager[56503]: <info>  [1764695679.5552] manager: (tap5882ec1f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.559 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.564 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5882ec1f-b0, col_values=(('external_ids', {'iface-id': '2b400733-be6e-4881-b4c2-791cab786045'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.565 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 ovn_controller[97975]: 2025-12-02T17:14:39Z|00121|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.568 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.569 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5882ec1f-b595-4c00-871f-f9ec4c7212bd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5882ec1f-b595-4c00-871f-f9ec4c7212bd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.570 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3ac47-1469-41fd-ad03-da91f5e236e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.570 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-5882ec1f-b595-4c00-871f-f9ec4c7212bd
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/5882ec1f-b595-4c00-871f-f9ec4c7212bd.pid.haproxy
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 5882ec1f-b595-4c00-871f-f9ec4c7212bd
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:14:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:39.571 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'env', 'PROCESS_TAG=haproxy-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5882ec1f-b595-4c00-871f-f9ec4c7212bd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.580 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.855 189463 DEBUG nova.virt.libvirt.host [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Removed pending event for 4994ed6b-5e0c-4061-a84c-f46ccf29489f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.855 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695679.8542252, 4994ed6b-5e0c-4061-a84c-f46ccf29489f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.855 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.868 189463 DEBUG nova.compute.manager [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.875 189463 INFO nova.virt.libvirt.driver [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance rebooted successfully.#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.876 189463 DEBUG nova.compute.manager [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.880 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.891 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.929 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.930 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695679.8678086, 4994ed6b-5e0c-4061-a84c-f46ccf29489f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.930 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] VM Started (Lifecycle Event)#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.958 189463 DEBUG oslo_concurrency.lockutils [None req-8061822b-2fca-4d29-b0c0-23faeffea084 c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.962 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.968 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.987 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695679.9601867, c42974d1-ca42-4b24-bf99-14f43ee59916 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:39 compute-0 nova_compute[189459]: 2025-12-02 17:14:39.988 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] VM Started (Lifecycle Event)#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.003 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.011 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695679.960336, c42974d1-ca42-4b24-bf99-14f43ee59916 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.012 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.029 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.039 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.075 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:14:40 compute-0 podman[252404]: 2025-12-02 17:14:40.087797421 +0000 UTC m=+0.111627192 container create 6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  2 17:14:40 compute-0 podman[252404]: 2025-12-02 17:14:40.044583189 +0000 UTC m=+0.068413010 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:14:40 compute-0 systemd[1]: Started libpod-conmon-6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab.scope.
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.170 189463 DEBUG nova.network.neutron [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updated VIF entry in instance network info cache for port 84301772-f4d5-42b6-bb8d-a3217c3c9135. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.171 189463 DEBUG nova.network.neutron [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updating instance_info_cache with network_info: [{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:40 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.191 189463 DEBUG oslo_concurrency.lockutils [req-28af6e45-8ccf-4735-9f57-2ff3ef0f55b4 req-55a41cce-1367-4807-96ab-1e768abe411d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af576b7006e13d2e9c814e4cec8e81d2dbc56af3a1fdeeb56d1609011f1ec06/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:14:40 compute-0 podman[252404]: 2025-12-02 17:14:40.21598784 +0000 UTC m=+0.239817621 container init 6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:14:40 compute-0 podman[252404]: 2025-12-02 17:14:40.232167158 +0000 UTC m=+0.255996919 container start 6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:14:40 compute-0 ovn_controller[97975]: 2025-12-02T17:14:40Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:e4:18 10.100.0.8
Dec  2 17:14:40 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [NOTICE]   (252422) : New worker (252424) forked
Dec  2 17:14:40 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [NOTICE]   (252422) : Loading success.
Dec  2 17:14:40 compute-0 ovn_controller[97975]: 2025-12-02T17:14:40Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:e4:18 10.100.0.8
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.304 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 84301772-f4d5-42b6-bb8d-a3217c3c9135 in datapath 2591d563-0f24-454c-a7d6-5a800a4529e5 unbound from our chassis#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.306 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2591d563-0f24-454c-a7d6-5a800a4529e5#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.315 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[8ded0e50-aaec-494b-99b2-4d07238a7de8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.316 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2591d563-01 in ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.318 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2591d563-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.318 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd8d212-056c-4297-a0d8-325cad55858b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.319 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[4dccacb8-e301-415a-a22b-a0ac42a5de7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.332 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[46f9d8fb-bee4-4681-ac06-594d812fbecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.356 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[db2bc6b9-5115-4e3f-8a0a-fa7a67074d7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.388 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[7558cc72-ee40-4bc1-a243-1cd41b19ee1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.398 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5b076ce0-ab32-47a7-b91a-3390c4682c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 systemd-udevd[252338]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:14:40 compute-0 NetworkManager[56503]: <info>  [1764695680.4011] manager: (tap2591d563-00): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.441 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc98921-a09c-4947-80d6-ff6dc5fe76d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.445 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0985cd-aaf7-4396-9f2f-119cff4f05ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 podman[252437]: 2025-12-02 17:14:40.455120622 +0000 UTC m=+0.082335858 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:14:40 compute-0 podman[252438]: 2025-12-02 17:14:40.461236854 +0000 UTC m=+0.080770397 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:14:40 compute-0 NetworkManager[56503]: <info>  [1764695680.4739] device (tap2591d563-00): carrier: link connected
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.482 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[3229e01a-3dcf-4107-a9c2-3ba8759e64d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 podman[252436]: 2025-12-02 17:14:40.499048763 +0000 UTC m=+0.132750980 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.502 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb2b467-029b-45a4-95f5-05cc128c989a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2591d563-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:3d:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522900, 'reachable_time': 29759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252511, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.518 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3749c8-1d9e-42a8-b03b-efe03fb4f8d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:3d24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522900, 'tstamp': 522900}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252512, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.541 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a8282e-1a61-4ff5-af33-6c183800570d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2591d563-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:3d:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522900, 'reachable_time': 29759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252513, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.573 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[56a94fb0-5588-43da-80d0-e2416cfcc6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.653 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ff295e84-4324-4921-8c1e-a5576d1cd193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.655 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2591d563-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.656 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.656 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2591d563-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:40 compute-0 kernel: tap2591d563-00: entered promiscuous mode
Dec  2 17:14:40 compute-0 NetworkManager[56503]: <info>  [1764695680.6647] manager: (tap2591d563-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.666 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.675 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2591d563-00, col_values=(('external_ids', {'iface-id': '089cea48-dae2-41a3-a3af-07863c5f0392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:14:40 compute-0 ovn_controller[97975]: 2025-12-02T17:14:40Z|00122|binding|INFO|Releasing lport 089cea48-dae2-41a3-a3af-07863c5f0392 from this chassis (sb_readonly=0)
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.678 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:40 compute-0 nova_compute[189459]: 2025-12-02 17:14:40.699 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.699 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2591d563-0f24-454c-a7d6-5a800a4529e5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2591d563-0f24-454c-a7d6-5a800a4529e5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.702 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e0263947-92f6-4c1b-bb20-36f32e8d9322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.703 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-2591d563-0f24-454c-a7d6-5a800a4529e5
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/2591d563-0f24-454c-a7d6-5a800a4529e5.pid.haproxy
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 2591d563-0f24-454c-a7d6-5a800a4529e5
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:14:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:14:40.704 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'env', 'PROCESS_TAG=haproxy-2591d563-0f24-454c-a7d6-5a800a4529e5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2591d563-0f24-454c-a7d6-5a800a4529e5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.148 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.149 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.149 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.150 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.150 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.151 189463 WARNING nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received unexpected event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.151 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.152 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.153 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.153 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.154 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.154 189463 WARNING nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received unexpected event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.154 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.155 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.159 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.159 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.159 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.160 189463 WARNING nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received unexpected event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.161 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.161 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.161 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.162 189463 DEBUG oslo_concurrency.lockutils [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.162 189463 DEBUG nova.compute.manager [req-0ee31f4d-abbf-4154-9fc3-edc0fe51a758 req-fed05a4c-19c9-4182-aa67-4da07a9a6509 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Processing event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.164 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.183 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695681.1725998, c42974d1-ca42-4b24-bf99-14f43ee59916 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.184 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.187 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.196 189463 INFO nova.virt.libvirt.driver [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Instance spawned successfully.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.197 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:14:41 compute-0 podman[252545]: 2025-12-02 17:14:41.274452502 +0000 UTC m=+0.137628759 container create d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 17:14:41 compute-0 podman[252545]: 2025-12-02 17:14:41.225272302 +0000 UTC m=+0.088448559 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:14:41 compute-0 systemd[1]: Started libpod-conmon-d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9.scope.
Dec  2 17:14:41 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab051721dd85874a67e6f450ced4a8c9198ec212e42cfccf523f40a811f47f0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:14:41 compute-0 podman[252545]: 2025-12-02 17:14:41.378327038 +0000 UTC m=+0.241503305 container init d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.387 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:41 compute-0 podman[252545]: 2025-12-02 17:14:41.388589529 +0000 UTC m=+0.251765786 container start d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.388 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.389 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.390 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.391 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.391 189463 DEBUG nova.virt.libvirt.driver [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.395 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.400 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:14:41 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [NOTICE]   (252565) : New worker (252567) forked
Dec  2 17:14:41 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [NOTICE]   (252565) : Loading success.
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.419 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.443 189463 INFO nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Took 10.95 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.443 189463 DEBUG nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.502 189463 INFO nova.compute.manager [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Took 11.65 seconds to build instance.#033[00m
Dec  2 17:14:41 compute-0 nova_compute[189459]: 2025-12-02 17:14:41.521 189463 DEBUG oslo_concurrency.lockutils [None req-06e045ab-4249-4797-8945-353bcc7743e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.270 189463 DEBUG nova.compute.manager [req-4709b6f3-859d-44ba-a196-3f2f66093d4c req-2dddbe5a-43eb-4f6a-8f1b-b34cec5e665e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.270 189463 DEBUG oslo_concurrency.lockutils [req-4709b6f3-859d-44ba-a196-3f2f66093d4c req-2dddbe5a-43eb-4f6a-8f1b-b34cec5e665e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.271 189463 DEBUG oslo_concurrency.lockutils [req-4709b6f3-859d-44ba-a196-3f2f66093d4c req-2dddbe5a-43eb-4f6a-8f1b-b34cec5e665e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.271 189463 DEBUG oslo_concurrency.lockutils [req-4709b6f3-859d-44ba-a196-3f2f66093d4c req-2dddbe5a-43eb-4f6a-8f1b-b34cec5e665e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.271 189463 DEBUG nova.compute.manager [req-4709b6f3-859d-44ba-a196-3f2f66093d4c req-2dddbe5a-43eb-4f6a-8f1b-b34cec5e665e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] No waiting events found dispatching network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.271 189463 WARNING nova.compute.manager [req-4709b6f3-859d-44ba-a196-3f2f66093d4c req-2dddbe5a-43eb-4f6a-8f1b-b34cec5e665e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received unexpected event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.327 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:43 compute-0 nova_compute[189459]: 2025-12-02 17:14:43.995 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:44 compute-0 nova_compute[189459]: 2025-12-02 17:14:44.410 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:46 compute-0 nova_compute[189459]: 2025-12-02 17:14:46.223 189463 DEBUG nova.compute.manager [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-changed-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:14:46 compute-0 nova_compute[189459]: 2025-12-02 17:14:46.224 189463 DEBUG nova.compute.manager [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Refreshing instance network info cache due to event network-changed-84301772-f4d5-42b6-bb8d-a3217c3c9135. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:14:46 compute-0 nova_compute[189459]: 2025-12-02 17:14:46.224 189463 DEBUG oslo_concurrency.lockutils [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:14:46 compute-0 nova_compute[189459]: 2025-12-02 17:14:46.224 189463 DEBUG oslo_concurrency.lockutils [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:14:46 compute-0 nova_compute[189459]: 2025-12-02 17:14:46.224 189463 DEBUG nova.network.neutron [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Refreshing network info cache for port 84301772-f4d5-42b6-bb8d-a3217c3c9135 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:14:48 compute-0 nova_compute[189459]: 2025-12-02 17:14:48.330 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:48 compute-0 nova_compute[189459]: 2025-12-02 17:14:48.436 189463 DEBUG nova.network.neutron [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updated VIF entry in instance network info cache for port 84301772-f4d5-42b6-bb8d-a3217c3c9135. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:14:48 compute-0 nova_compute[189459]: 2025-12-02 17:14:48.437 189463 DEBUG nova.network.neutron [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updating instance_info_cache with network_info: [{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:14:48 compute-0 nova_compute[189459]: 2025-12-02 17:14:48.456 189463 DEBUG oslo_concurrency.lockutils [req-5c695ac8-c837-4ac6-bba1-db6dca74d083 req-a2abd08a-70d6-4f30-816a-b4fcc41003ce b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:14:48 compute-0 nova_compute[189459]: 2025-12-02 17:14:48.490 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:48 compute-0 nova_compute[189459]: 2025-12-02 17:14:48.998 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:53 compute-0 podman[252578]: 2025-12-02 17:14:53.286590335 +0000 UTC m=+0.105390188 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  2 17:14:53 compute-0 nova_compute[189459]: 2025-12-02 17:14:53.333 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:54 compute-0 nova_compute[189459]: 2025-12-02 17:14:54.003 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:55 compute-0 ovn_controller[97975]: 2025-12-02T17:14:55Z|00123|binding|INFO|Releasing lport dec4099c-2b77-4702-ba34-4381a59eb57f from this chassis (sb_readonly=0)
Dec  2 17:14:55 compute-0 ovn_controller[97975]: 2025-12-02T17:14:55Z|00124|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:14:55 compute-0 ovn_controller[97975]: 2025-12-02T17:14:55Z|00125|binding|INFO|Releasing lport 089cea48-dae2-41a3-a3af-07863c5f0392 from this chassis (sb_readonly=0)
Dec  2 17:14:55 compute-0 nova_compute[189459]: 2025-12-02 17:14:55.406 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:58 compute-0 podman[252599]: 2025-12-02 17:14:58.298467509 +0000 UTC m=+0.116467020 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  2 17:14:58 compute-0 podman[252600]: 2025-12-02 17:14:58.316267839 +0000 UTC m=+0.123151086 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 17:14:58 compute-0 nova_compute[189459]: 2025-12-02 17:14:58.336 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:59 compute-0 nova_compute[189459]: 2025-12-02 17:14:59.007 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:14:59 compute-0 podman[203941]: time="2025-12-02T17:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:14:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec  2 17:14:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5712 "" "Go-http-client/1.1"
Dec  2 17:15:00 compute-0 ovn_controller[97975]: 2025-12-02T17:15:00Z|00126|binding|INFO|Releasing lport dec4099c-2b77-4702-ba34-4381a59eb57f from this chassis (sb_readonly=0)
Dec  2 17:15:00 compute-0 ovn_controller[97975]: 2025-12-02T17:15:00Z|00127|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:15:00 compute-0 ovn_controller[97975]: 2025-12-02T17:15:00Z|00128|binding|INFO|Releasing lport 089cea48-dae2-41a3-a3af-07863c5f0392 from this chassis (sb_readonly=0)
Dec  2 17:15:00 compute-0 nova_compute[189459]: 2025-12-02 17:15:00.427 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:01 compute-0 openstack_network_exporter[206093]: ERROR   17:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:15:01 compute-0 openstack_network_exporter[206093]: ERROR   17:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:15:01 compute-0 openstack_network_exporter[206093]: ERROR   17:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:15:01 compute-0 openstack_network_exporter[206093]: ERROR   17:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:15:01 compute-0 openstack_network_exporter[206093]: ERROR   17:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:15:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:01.886 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:01.887 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:01.888 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:02 compute-0 podman[252637]: 2025-12-02 17:15:02.279486671 +0000 UTC m=+0.104335539 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:15:02 compute-0 podman[252638]: 2025-12-02 17:15:02.292132195 +0000 UTC m=+0.114102887 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public)
Dec  2 17:15:02 compute-0 podman[252639]: 2025-12-02 17:15:02.303060804 +0000 UTC m=+0.105974733 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.055 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.055 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.066 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4994ed6b-5e0c-4061-a84c-f46ccf29489f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.067 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4994ed6b-5e0c-4061-a84c-f46ccf29489f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 17:15:03 compute-0 nova_compute[189459]: 2025-12-02 17:15:03.339 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.950 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1977 Content-Type: application/json Date: Tue, 02 Dec 2025 17:15:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f39ede14-e615-4556-be3a-63cf64f80b93 x-openstack-request-id: req-f39ede14-e615-4556-be3a-63cf64f80b93 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.950 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4994ed6b-5e0c-4061-a84c-f46ccf29489f", "name": "tempest-ServerActionsTestJSON-server-254489110", "status": "ACTIVE", "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "user_id": "c800961435cb4a418a6ee67240a574fe", "metadata": {}, "hostId": "c7dc1f5d407c6b7da809e88e75701b5b5e1f54a841ba209241dab9a7", "image": {"id": "b90f8403-6db1-4b01-bb62-c5b878a5c904", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b90f8403-6db1-4b01-bb62-c5b878a5c904"}]}, "flavor": {"id": "8e4a4b21-ee56-489d-aeb9-f21b8412f996", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8e4a4b21-ee56-489d-aeb9-f21b8412f996"}]}, "created": "2025-12-02T17:13:04Z", "updated": "2025-12-02T17:14:39Z", "addresses": {"tempest-ServerActionsTestJSON-332004562-network": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:76:b9"}, {"version": 4, "addr": "192.168.122.225", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:76:b9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4994ed6b-5e0c-4061-a84c-f46ccf29489f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4994ed6b-5e0c-4061-a84c-f46ccf29489f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-508494976", "OS-SRV-USG:launched_at": "2025-12-02T17:13:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--200071993"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.950 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4994ed6b-5e0c-4061-a84c-f46ccf29489f used request id req-f39ede14-e615-4556-be3a-63cf64f80b93 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.952 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4994ed6b-5e0c-4061-a84c-f46ccf29489f', 'name': 'tempest-ServerActionsTestJSON-server-254489110', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95abfdbd702a49dc89fc01dd45a4e014', 'user_id': 'c800961435cb4a418a6ee67240a574fe', 'hostId': 'c7dc1f5d407c6b7da809e88e75701b5b5e1f54a841ba209241dab9a7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.956 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7ef2cae4-13df-469d-8820-5435724f49c5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 17:15:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:03.957 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7ef2cae4-13df-469d-8820-5435724f49c5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 17:15:04 compute-0 nova_compute[189459]: 2025-12-02 17:15:04.010 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:04.017 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 17:15:04 compute-0 nova_compute[189459]: 2025-12-02 17:15:04.018 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:04.019 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  2 17:15:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:04.872 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1994 Content-Type: application/json Date: Tue, 02 Dec 2025 17:15:03 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5143d3cb-0418-40be-b4d1-ae699053f719 x-openstack-request-id: req-5143d3cb-0418-40be-b4d1-ae699053f719 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 17:15:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:04.872 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7ef2cae4-13df-469d-8820-5435724f49c5", "name": "tempest-AttachInterfacesUnderV243Test-server-1092142075", "status": "ACTIVE", "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "user_id": "3508c10574e947d4ac9984098e029d62", "metadata": {}, "hostId": "9084cef1e7b10239eb463b70717ea6807411a659daf2da0e0a35e7b2", "image": {"id": "b90f8403-6db1-4b01-bb62-c5b878a5c904", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b90f8403-6db1-4b01-bb62-c5b878a5c904"}]}, "flavor": {"id": "8e4a4b21-ee56-489d-aeb9-f21b8412f996", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8e4a4b21-ee56-489d-aeb9-f21b8412f996"}]}, "created": "2025-12-02T17:13:54Z", "updated": "2025-12-02T17:14:04Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-271618246-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d3:e4:18"}, {"version": 4, "addr": "192.168.122.180", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d3:e4:18"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7ef2cae4-13df-469d-8820-5435724f49c5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7ef2cae4-13df-469d-8820-5435724f49c5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-310645273", "OS-SRV-USG:launched_at": "2025-12-02T17:14:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--251194204"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 17:15:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:04.872 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7ef2cae4-13df-469d-8820-5435724f49c5 used request id req-5143d3cb-0418-40be-b4d1-ae699053f719 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 17:15:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:04.874 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7ef2cae4-13df-469d-8820-5435724f49c5', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1092142075', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f70c98cac9964fff961eb6a5439591fc', 'user_id': '3508c10574e947d4ac9984098e029d62', 'hostId': '9084cef1e7b10239eb463b70717ea6807411a659daf2da0e0a35e7b2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:15:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:04.879 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c42974d1-ca42-4b24-bf99-14f43ee59916 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 17:15:04 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:04.881 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c42974d1-ca42-4b24-bf99-14f43ee59916 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.385 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1975 Content-Type: application/json Date: Tue, 02 Dec 2025 17:15:04 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-070c372a-c6e9-4c2e-9a07-6a100445089e x-openstack-request-id: req-070c372a-c6e9-4c2e-9a07-6a100445089e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.386 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c42974d1-ca42-4b24-bf99-14f43ee59916", "name": "tempest-TestNetworkBasicOps-server-1141860031", "status": "ACTIVE", "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "user_id": "ed4b2c7904414b1cb5c9314cf52d7eff", "metadata": {}, "hostId": "30a68427eb868a6a54ded05ff1d74bf8b8a49a42c7d2ae3785d2ac0c", "image": {"id": "b90f8403-6db1-4b01-bb62-c5b878a5c904", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b90f8403-6db1-4b01-bb62-c5b878a5c904"}]}, "flavor": {"id": "8e4a4b21-ee56-489d-aeb9-f21b8412f996", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8e4a4b21-ee56-489d-aeb9-f21b8412f996"}]}, "created": "2025-12-02T17:14:29Z", "updated": "2025-12-02T17:14:41Z", "addresses": {"tempest-network-smoke--1256485445": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a9:d2:17"}, {"version": 4, "addr": "192.168.122.236", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a9:d2:17"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c42974d1-ca42-4b24-bf99-14f43ee59916"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c42974d1-ca42-4b24-bf99-14f43ee59916"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-292184376", "OS-SRV-USG:launched_at": "2025-12-02T17:14:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-747052570"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.386 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c42974d1-ca42-4b24-bf99-14f43ee59916 used request id req-070c372a-c6e9-4c2e-9a07-6a100445089e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.389 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c42974d1-ca42-4b24-bf99-14f43ee59916', 'name': 'tempest-TestNetworkBasicOps-server-1141860031', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'user_id': 'ed4b2c7904414b1cb5c9314cf52d7eff', 'hostId': '30a68427eb868a6a54ded05ff1d74bf8b8a49a42c7d2ae3785d2ac0c', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.390 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.392 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:15:05.392065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.399 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4994ed6b-5e0c-4061-a84c-f46ccf29489f / tap5f7c429b-02 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.400 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.408 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7ef2cae4-13df-469d-8820-5435724f49c5 / tap6642128c-0b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.409 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.416 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c42974d1-ca42-4b24-bf99-14f43ee59916 / tap84301772-f4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.416 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.418 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.419 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.419 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:15:05.419180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.420 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.420 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.421 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:15:05.422402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.445 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.447 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.471 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.472 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.492 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.493 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.496 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.498 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:15:05.497950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.527 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/cpu volume: 25030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.556 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/cpu volume: 35550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.577 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/cpu volume: 23870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.579 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:15:05.579776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.620 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.621 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.700 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.read.bytes volume: 29596160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.701 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.758 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.758 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.760 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.761 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.761 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.762 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.read.latency volume: 445949973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.762 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.read.latency volume: 955095 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.763 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.read.latency volume: 712657323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:15:05.761553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.764 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.read.latency volume: 59596227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.765 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.read.latency volume: 513931902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.765 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.read.latency volume: 1552911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.766 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.767 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.767 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.767 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.768 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.768 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:15:05.768220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.769 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.769 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.770 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.read.requests volume: 1065 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.771 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.771 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.772 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.774 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.774 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:15:05.775175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.775 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.775 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.776 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.777 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.777 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.778 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.779 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.780 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.782 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.783 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:15:05.781778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.783 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.784 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.785 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.785 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.787 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.787 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:15:05.788512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.788 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.789 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.789 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.790 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.write.bytes volume: 72929280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.791 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.791 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.792 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.794 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.794 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.795 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:15:05.794903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.795 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.795 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.795 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.796 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.write.latency volume: 3125573362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.796 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.796 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.797 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.798 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.798 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.798 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.799 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.799 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.800 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.801 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.802 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.802 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.write.requests volume: 307 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.802 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.803 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.803 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:15:05.798667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.804 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.805 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.805 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:15:05.801438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:15:05.805428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.806 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.806 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.807 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.807 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.808 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T17:15:05.807877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.808 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-254489110>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1092142075>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1141860031>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-254489110>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1092142075>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1141860031>]
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.809 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.809 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.810 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.810 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.811 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.811 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:15:05.809613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.812 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:15:05.811571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.812 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.813 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.813 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.813 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:15:05.813649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.815 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.815 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.815 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.816 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.816 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.817 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.817 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.818 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:15:05.815632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.818 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.818 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.819 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:15:05.817903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.819 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.820 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:15:05.820202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.820 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.821 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.821 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.822 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:15:05.822474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.822 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.823 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.823 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.824 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 4994ed6b-5e0c-4061-a84c-f46ccf29489f: ceilometer.compute.pollsters.NoVolumeException
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.825 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/memory.usage volume: 42.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:15:05.824497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.825 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.825 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance c42974d1-ca42-4b24-bf99-14f43ee59916: ceilometer.compute.pollsters.NoVolumeException
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.826 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.826 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.826 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-254489110>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1092142075>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1141860031>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-254489110>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1092142075>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1141860031>]
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T17:15:05.826342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.827 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.incoming.bytes volume: 1706 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:15:05.827615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.829 14 DEBUG ceilometer.compute.pollsters [-] 4994ed6b-5e0c-4061-a84c-f46ccf29489f/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.829 14 DEBUG ceilometer.compute.pollsters [-] 7ef2cae4-13df-469d-8820-5435724f49c5/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.829 14 DEBUG ceilometer.compute.pollsters [-] c42974d1-ca42-4b24-bf99-14f43ee59916/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:15:05.828932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:05 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:15:05.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:15:08 compute-0 nova_compute[189459]: 2025-12-02 17:15:08.344 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:09 compute-0 nova_compute[189459]: 2025-12-02 17:15:09.013 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:10 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:10.021 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:15:11 compute-0 podman[252695]: 2025-12-02 17:15:11.25776135 +0000 UTC m=+0.077970192 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:15:11 compute-0 podman[252696]: 2025-12-02 17:15:11.287551877 +0000 UTC m=+0.088924901 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:15:11 compute-0 podman[252694]: 2025-12-02 17:15:11.357894747 +0000 UTC m=+0.175772228 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 17:15:13 compute-0 nova_compute[189459]: 2025-12-02 17:15:13.351 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:14 compute-0 nova_compute[189459]: 2025-12-02 17:15:14.017 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:14 compute-0 ovn_controller[97975]: 2025-12-02T17:15:14Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:76:b9 10.100.0.5
Dec  2 17:15:15 compute-0 ovn_controller[97975]: 2025-12-02T17:15:15Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:d2:17 10.100.0.13
Dec  2 17:15:15 compute-0 ovn_controller[97975]: 2025-12-02T17:15:15Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:d2:17 10.100.0.13
Dec  2 17:15:15 compute-0 nova_compute[189459]: 2025-12-02 17:15:15.455 189463 DEBUG nova.objects.instance [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lazy-loading 'flavor' on Instance uuid 7ef2cae4-13df-469d-8820-5435724f49c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:15:15 compute-0 nova_compute[189459]: 2025-12-02 17:15:15.511 189463 DEBUG oslo_concurrency.lockutils [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:15 compute-0 nova_compute[189459]: 2025-12-02 17:15:15.511 189463 DEBUG oslo_concurrency.lockutils [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:15:17 compute-0 nova_compute[189459]: 2025-12-02 17:15:17.263 189463 DEBUG nova.network.neutron [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:15:17 compute-0 nova_compute[189459]: 2025-12-02 17:15:17.435 189463 DEBUG nova.compute.manager [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:17 compute-0 nova_compute[189459]: 2025-12-02 17:15:17.436 189463 DEBUG nova.compute.manager [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing instance network info cache due to event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:15:17 compute-0 nova_compute[189459]: 2025-12-02 17:15:17.436 189463 DEBUG oslo_concurrency.lockutils [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:18 compute-0 nova_compute[189459]: 2025-12-02 17:15:18.355 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:19 compute-0 nova_compute[189459]: 2025-12-02 17:15:19.021 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.475 189463 INFO nova.compute.manager [None req-6f0d3ebc-3df1-4503-a473-5bd3a846936d ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Get console output#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.603 239820 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.814 189463 DEBUG nova.network.neutron [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.836 189463 DEBUG oslo_concurrency.lockutils [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.837 189463 DEBUG nova.compute.manager [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.838 189463 DEBUG nova.compute.manager [None req-bc7ea84d-8a88-4d4d-9869-a65e8fe9016c 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] network_info to inject: |[{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.842 189463 DEBUG oslo_concurrency.lockutils [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:15:21 compute-0 nova_compute[189459]: 2025-12-02 17:15:21.843 189463 DEBUG nova.network.neutron [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.359 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.459 189463 DEBUG nova.objects.instance [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lazy-loading 'flavor' on Instance uuid 7ef2cae4-13df-469d-8820-5435724f49c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.486 189463 DEBUG oslo_concurrency.lockutils [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.538 189463 DEBUG nova.network.neutron [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updated VIF entry in instance network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.539 189463 DEBUG nova.network.neutron [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.554 189463 DEBUG oslo_concurrency.lockutils [req-1336e2d6-97ae-4e01-9ff1-6254a09ef0c5 req-5275f59f-16b4-43a9-b763-fbd999a6ee1f b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.556 189463 DEBUG oslo_concurrency.lockutils [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.661 189463 DEBUG nova.compute.manager [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-changed-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.662 189463 DEBUG nova.compute.manager [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Refreshing instance network info cache due to event network-changed-84301772-f4d5-42b6-bb8d-a3217c3c9135. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.662 189463 DEBUG oslo_concurrency.lockutils [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.663 189463 DEBUG oslo_concurrency.lockutils [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:15:23 compute-0 nova_compute[189459]: 2025-12-02 17:15:23.663 189463 DEBUG nova.network.neutron [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Refreshing network info cache for port 84301772-f4d5-42b6-bb8d-a3217c3c9135 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:15:24 compute-0 nova_compute[189459]: 2025-12-02 17:15:24.024 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:24 compute-0 podman[252781]: 2025-12-02 17:15:24.313187044 +0000 UTC m=+0.138327948 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7)
Dec  2 17:15:24 compute-0 nova_compute[189459]: 2025-12-02 17:15:24.791 189463 DEBUG nova.network.neutron [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:15:24 compute-0 nova_compute[189459]: 2025-12-02 17:15:24.907 189463 DEBUG nova.compute.manager [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:24 compute-0 nova_compute[189459]: 2025-12-02 17:15:24.907 189463 DEBUG nova.compute.manager [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing instance network info cache due to event network-changed-6642128c-0bde-4b10-95e2-8c6fd2e666fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:15:24 compute-0 nova_compute[189459]: 2025-12-02 17:15:24.908 189463 DEBUG oslo_concurrency.lockutils [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:25 compute-0 nova_compute[189459]: 2025-12-02 17:15:25.374 189463 DEBUG nova.network.neutron [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updated VIF entry in instance network info cache for port 84301772-f4d5-42b6-bb8d-a3217c3c9135. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:15:25 compute-0 nova_compute[189459]: 2025-12-02 17:15:25.374 189463 DEBUG nova.network.neutron [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updating instance_info_cache with network_info: [{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:25 compute-0 nova_compute[189459]: 2025-12-02 17:15:25.396 189463 DEBUG oslo_concurrency.lockutils [req-521bdd54-5169-4042-b361-33e97399241f req-e5a86062-0b92-4263-b8b6-3b1df127fc0d b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:15:26 compute-0 nova_compute[189459]: 2025-12-02 17:15:26.141 189463 DEBUG nova.network.neutron [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:26 compute-0 nova_compute[189459]: 2025-12-02 17:15:26.183 189463 DEBUG oslo_concurrency.lockutils [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:15:26 compute-0 nova_compute[189459]: 2025-12-02 17:15:26.183 189463 DEBUG nova.compute.manager [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  2 17:15:26 compute-0 nova_compute[189459]: 2025-12-02 17:15:26.183 189463 DEBUG nova.compute.manager [None req-e4ccf25f-ed71-4ce4-ba15-fc86a7af7006 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] network_info to inject: |[{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  2 17:15:26 compute-0 nova_compute[189459]: 2025-12-02 17:15:26.185 189463 DEBUG oslo_concurrency.lockutils [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:15:26 compute-0 nova_compute[189459]: 2025-12-02 17:15:26.186 189463 DEBUG nova.network.neutron [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Refreshing network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.079 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.081 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.081 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.082 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.083 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.086 189463 INFO nova.compute.manager [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Terminating instance#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.088 189463 DEBUG nova.compute.manager [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:15:27 compute-0 kernel: tap6642128c-0b (unregistering): left promiscuous mode
Dec  2 17:15:27 compute-0 NetworkManager[56503]: <info>  [1764695727.1288] device (tap6642128c-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.138 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 ovn_controller[97975]: 2025-12-02T17:15:27Z|00129|binding|INFO|Releasing lport 6642128c-0bde-4b10-95e2-8c6fd2e666fc from this chassis (sb_readonly=0)
Dec  2 17:15:27 compute-0 ovn_controller[97975]: 2025-12-02T17:15:27Z|00130|binding|INFO|Setting lport 6642128c-0bde-4b10-95e2-8c6fd2e666fc down in Southbound
Dec  2 17:15:27 compute-0 ovn_controller[97975]: 2025-12-02T17:15:27Z|00131|binding|INFO|Removing iface tap6642128c-0b ovn-installed in OVS
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.142 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.147 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:e4:18 10.100.0.8'], port_security=['fa:16:3e:d3:e4:18 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7ef2cae4-13df-469d-8820-5435724f49c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a320061d-433a-4deb-901d-3feb7979c906', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f70c98cac9964fff961eb6a5439591fc', 'neutron:revision_number': '6', 'neutron:security_group_ids': '798c1fb9-ee0e-49ab-b9b3-41e9074e219f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b23f6271-ee6c-46aa-a698-b66eef1ab937, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=6642128c-0bde-4b10-95e2-8c6fd2e666fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.149 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 6642128c-0bde-4b10-95e2-8c6fd2e666fc in datapath a320061d-433a-4deb-901d-3feb7979c906 unbound from our chassis#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.151 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a320061d-433a-4deb-901d-3feb7979c906, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.153 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f24538b1-f7d9-4ef1-a777-0f2165da0620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.154 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a320061d-433a-4deb-901d-3feb7979c906 namespace which is not needed anymore#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.161 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  2 17:15:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 43.822s CPU time.
Dec  2 17:15:27 compute-0 systemd-machined[155878]: Machine qemu-9-instance-00000009 terminated.
Dec  2 17:15:27 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [NOTICE]   (251728) : haproxy version is 2.8.14-c23fe91
Dec  2 17:15:27 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [NOTICE]   (251728) : path to executable is /usr/sbin/haproxy
Dec  2 17:15:27 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [WARNING]  (251728) : Exiting Master process...
Dec  2 17:15:27 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [WARNING]  (251728) : Exiting Master process...
Dec  2 17:15:27 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [ALERT]    (251728) : Current worker (251730) exited with code 143 (Terminated)
Dec  2 17:15:27 compute-0 neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906[251723]: [WARNING]  (251728) : All workers exited. Exiting... (0)
Dec  2 17:15:27 compute-0 systemd[1]: libpod-7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5.scope: Deactivated successfully.
Dec  2 17:15:27 compute-0 podman[252824]: 2025-12-02 17:15:27.348376292 +0000 UTC m=+0.073481484 container died 7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 17:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5-userdata-shm.mount: Deactivated successfully.
Dec  2 17:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe3e2ed6fb53cf8a32c64684c83f0b067aebe26776629ff068bb9f164f98d597-merged.mount: Deactivated successfully.
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.388 189463 INFO nova.virt.libvirt.driver [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Instance destroyed successfully.#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.389 189463 DEBUG nova.objects.instance [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lazy-loading 'resources' on Instance uuid 7ef2cae4-13df-469d-8820-5435724f49c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.405 189463 DEBUG nova.virt.libvirt.vif [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1092142075',display_name='tempest-AttachInterfacesUnderV243Test-server-1092142075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1092142075',id=9,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00myeHNTP3xWyB/JPmEMDPJD/Z3tL0Gl6ipVjy90/2iHk+En9ILFQGTf5rDJoEl55ATekTFAiHehQR6buTg8Xf9pptQNp27v9TvP4zRTlRv81Vpao2vmAwLMvFdE1dKw==',key_name='tempest-keypair-310645273',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:14:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f70c98cac9964fff961eb6a5439591fc',ramdisk_id='',reservation_id='r-djyqhxse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1844515369',owner_user_name='tempest-AttachInterfacesUnderV243Test-1844515369-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:15:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3508c10574e947d4ac9984098e029d62',uuid=7ef2cae4-13df-469d-8820-5435724f49c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.406 189463 DEBUG nova.network.os_vif_util [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Converting VIF {"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.407 189463 DEBUG nova.network.os_vif_util [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.408 189463 DEBUG os_vif [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.410 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.411 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6642128c-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.413 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.417 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.420 189463 INFO os_vif [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:e4:18,bridge_name='br-int',has_traffic_filtering=True,id=6642128c-0bde-4b10-95e2-8c6fd2e666fc,network=Network(a320061d-433a-4deb-901d-3feb7979c906),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6642128c-0b')#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.421 189463 INFO nova.virt.libvirt.driver [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Deleting instance files /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5_del#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.422 189463 INFO nova.virt.libvirt.driver [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Deletion of /var/lib/nova/instances/7ef2cae4-13df-469d-8820-5435724f49c5_del complete#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.584 189463 DEBUG nova.compute.manager [req-e30a2557-987c-4738-934c-591a5dc3e38d req-2aadef95-4853-4a37-8c7b-e7de3a144b41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-vif-unplugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.585 189463 DEBUG oslo_concurrency.lockutils [req-e30a2557-987c-4738-934c-591a5dc3e38d req-2aadef95-4853-4a37-8c7b-e7de3a144b41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.585 189463 DEBUG oslo_concurrency.lockutils [req-e30a2557-987c-4738-934c-591a5dc3e38d req-2aadef95-4853-4a37-8c7b-e7de3a144b41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.585 189463 DEBUG oslo_concurrency.lockutils [req-e30a2557-987c-4738-934c-591a5dc3e38d req-2aadef95-4853-4a37-8c7b-e7de3a144b41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.586 189463 DEBUG nova.compute.manager [req-e30a2557-987c-4738-934c-591a5dc3e38d req-2aadef95-4853-4a37-8c7b-e7de3a144b41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] No waiting events found dispatching network-vif-unplugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.586 189463 DEBUG nova.compute.manager [req-e30a2557-987c-4738-934c-591a5dc3e38d req-2aadef95-4853-4a37-8c7b-e7de3a144b41 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-vif-unplugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.611 189463 INFO nova.compute.manager [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.612 189463 DEBUG oslo.service.loopingcall [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.613 189463 DEBUG nova.compute.manager [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.614 189463 DEBUG nova.network.neutron [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:15:27 compute-0 podman[252824]: 2025-12-02 17:15:27.787005187 +0000 UTC m=+0.512110409 container cleanup 7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:15:27 compute-0 systemd[1]: libpod-conmon-7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5.scope: Deactivated successfully.
Dec  2 17:15:27 compute-0 podman[252869]: 2025-12-02 17:15:27.90627762 +0000 UTC m=+0.083122238 container remove 7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.916 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ec76ce-3117-4f81-a87f-e5b5223a7cf0]: (4, ('Tue Dec  2 05:15:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906 (7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5)\n7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5\nTue Dec  2 05:15:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a320061d-433a-4deb-901d-3feb7979c906 (7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5)\n7d68d33eb275eeb3ec074d6e61341afe871abd5b1bab6029fc70dd1e822851e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.918 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[debefb19-f813-4c63-8f21-67a1d86cef47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.919 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa320061d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:15:27 compute-0 kernel: tapa320061d-40: left promiscuous mode
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.922 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 nova_compute[189459]: 2025-12-02 17:15:27.934 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.939 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[64947be2-b564-431e-8b78-6bb35969b786]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.958 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a02b9cfd-1abf-4128-8b95-c3486606dc22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.959 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[897aa68b-6714-4ba3-84c8-ecd249831882]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.976 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ee9c2a-1998-4ec1-be7f-b34a0e6b375d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519070, 'reachable_time': 16242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252883, 'error': None, 'target': 'ovnmeta-a320061d-433a-4deb-901d-3feb7979c906', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:27 compute-0 systemd[1]: run-netns-ovnmeta\x2da320061d\x2d433a\x2d4deb\x2d901d\x2d3feb7979c906.mount: Deactivated successfully.
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.982 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a320061d-433a-4deb-901d-3feb7979c906 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:15:27 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:27.982 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[f9070a1b-7cb5-4bb2-a581-dea8c3c98690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:28 compute-0 nova_compute[189459]: 2025-12-02 17:15:28.163 189463 DEBUG nova.network.neutron [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updated VIF entry in instance network info cache for port 6642128c-0bde-4b10-95e2-8c6fd2e666fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:15:28 compute-0 nova_compute[189459]: 2025-12-02 17:15:28.164 189463 DEBUG nova.network.neutron [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [{"id": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "address": "fa:16:3e:d3:e4:18", "network": {"id": "a320061d-433a-4deb-901d-3feb7979c906", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-271618246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f70c98cac9964fff961eb6a5439591fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6642128c-0b", "ovs_interfaceid": "6642128c-0bde-4b10-95e2-8c6fd2e666fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:28 compute-0 nova_compute[189459]: 2025-12-02 17:15:28.184 189463 DEBUG oslo_concurrency.lockutils [req-3625f0ef-3bd5-4654-b423-38dec692ebdc req-29c42219-2c67-4ac6-9fd5-db6746966abc b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-7ef2cae4-13df-469d-8820-5435724f49c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:15:28 compute-0 nova_compute[189459]: 2025-12-02 17:15:28.361 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:28 compute-0 nova_compute[189459]: 2025-12-02 17:15:28.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.077 189463 DEBUG nova.network.neutron [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.217 189463 INFO nova.compute.manager [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Took 1.60 seconds to deallocate network for instance.#033[00m
Dec  2 17:15:29 compute-0 podman[252885]: 2025-12-02 17:15:29.262085043 +0000 UTC m=+0.082772439 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:15:29 compute-0 podman[252884]: 2025-12-02 17:15:29.273765182 +0000 UTC m=+0.084198597 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.374 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.375 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.509 189463 DEBUG nova.compute.provider_tree [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.548 189463 DEBUG nova.scheduler.client.report [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.594 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:29 compute-0 podman[203941]: time="2025-12-02T17:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.749 189463 INFO nova.scheduler.client.report [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Deleted allocations for instance 7ef2cae4-13df-469d-8820-5435724f49c5#033[00m
Dec  2 17:15:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec  2 17:15:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5245 "" "Go-http-client/1.1"
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.931 189463 DEBUG nova.compute.manager [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.931 189463 DEBUG oslo_concurrency.lockutils [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.931 189463 DEBUG oslo_concurrency.lockutils [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.932 189463 DEBUG oslo_concurrency.lockutils [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.932 189463 DEBUG nova.compute.manager [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] No waiting events found dispatching network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.933 189463 WARNING nova.compute.manager [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received unexpected event network-vif-plugged-6642128c-0bde-4b10-95e2-8c6fd2e666fc for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.933 189463 DEBUG nova.compute.manager [req-54b8568b-b225-4066-82dc-bf98539c2b4a req-77c9f1ca-5aac-4f0c-8c29-4d33390d144c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Received event network-vif-deleted-6642128c-0bde-4b10-95e2-8c6fd2e666fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:29 compute-0 nova_compute[189459]: 2025-12-02 17:15:29.939 189463 DEBUG oslo_concurrency.lockutils [None req-912fb47c-eeb8-481b-8c70-3fb5bcfc4a93 3508c10574e947d4ac9984098e029d62 f70c98cac9964fff961eb6a5439591fc - - default default] Lock "7ef2cae4-13df-469d-8820-5435724f49c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:31 compute-0 nova_compute[189459]: 2025-12-02 17:15:31.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:15:31 compute-0 nova_compute[189459]: 2025-12-02 17:15:31.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: ERROR   17:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: ERROR   17:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: ERROR   17:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: ERROR   17:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: ERROR   17:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:15:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:15:31 compute-0 nova_compute[189459]: 2025-12-02 17:15:31.442 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:15:31 compute-0 nova_compute[189459]: 2025-12-02 17:15:31.442 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.044 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "de801101-d42c-462e-98a9-7a2a649cf1d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.044 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.117 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.414 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.609 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.610 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.621 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.621 189463 INFO nova.compute.claims [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.785 189463 DEBUG nova.scheduler.client.report [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.898 189463 DEBUG nova.scheduler.client.report [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.900 189463 DEBUG nova.compute.provider_tree [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.920 189463 DEBUG nova.scheduler.client.report [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:15:32 compute-0 nova_compute[189459]: 2025-12-02 17:15:32.939 189463 DEBUG nova.scheduler.client.report [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:15:33 compute-0 nova_compute[189459]: 2025-12-02 17:15:33.017 189463 DEBUG nova.compute.provider_tree [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:15:33 compute-0 podman[252924]: 2025-12-02 17:15:33.274785482 +0000 UTC m=+0.104491344 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:15:33 compute-0 podman[252925]: 2025-12-02 17:15:33.287129778 +0000 UTC m=+0.112108535 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 17:15:33 compute-0 podman[252926]: 2025-12-02 17:15:33.291487103 +0000 UTC m=+0.109567437 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  2 17:15:33 compute-0 nova_compute[189459]: 2025-12-02 17:15:33.363 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:33 compute-0 nova_compute[189459]: 2025-12-02 17:15:33.687 189463 DEBUG nova.scheduler.client.report [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:15:33 compute-0 nova_compute[189459]: 2025-12-02 17:15:33.772 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:33 compute-0 nova_compute[189459]: 2025-12-02 17:15:33.772 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.026 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.026 189463 DEBUG nova.network.neutron [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.216 189463 INFO nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.294 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.353 189463 DEBUG nova.policy [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed4b2c7904414b1cb5c9314cf52d7eff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.436 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.438 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.439 189463 INFO nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Creating image(s)#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.439 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "/var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.440 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "/var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.440 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "/var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.455 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.555 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.556 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.557 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.568 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.671 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.673 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.735 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk 1073741824" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.737 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.737 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.803 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.804 189463 DEBUG nova.virt.disk.api [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Checking if we can resize image /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.805 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.867 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.868 189463 DEBUG nova.virt.disk.api [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Cannot resize image /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:15:34 compute-0 nova_compute[189459]: 2025-12-02 17:15:34.869 189463 DEBUG nova.objects.instance [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lazy-loading 'migration_context' on Instance uuid de801101-d42c-462e-98a9-7a2a649cf1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:15:35 compute-0 nova_compute[189459]: 2025-12-02 17:15:35.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:15:35 compute-0 nova_compute[189459]: 2025-12-02 17:15:35.454 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:15:35 compute-0 nova_compute[189459]: 2025-12-02 17:15:35.455 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Ensure instance console log exists: /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:15:35 compute-0 nova_compute[189459]: 2025-12-02 17:15:35.455 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:35 compute-0 nova_compute[189459]: 2025-12-02 17:15:35.456 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:35 compute-0 nova_compute[189459]: 2025-12-02 17:15:35.456 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:36 compute-0 nova_compute[189459]: 2025-12-02 17:15:36.242 189463 DEBUG nova.network.neutron [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Successfully created port: 37210948-7d27-4586-a367-e083ee7fd9e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.169 189463 DEBUG nova.network.neutron [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Successfully updated port: 37210948-7d27-4586-a367-e083ee7fd9e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.364 189463 DEBUG nova.compute.manager [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Received event network-changed-37210948-7d27-4586-a367-e083ee7fd9e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.365 189463 DEBUG nova.compute.manager [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Refreshing instance network info cache due to event network-changed-37210948-7d27-4586-a367-e083ee7fd9e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.365 189463 DEBUG oslo_concurrency.lockutils [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.366 189463 DEBUG oslo_concurrency.lockutils [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.367 189463 DEBUG nova.network.neutron [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Refreshing network info cache for port 37210948-7d27-4586-a367-e083ee7fd9e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.372 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.418 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.437 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.547 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:15:37 compute-0 ovn_controller[97975]: 2025-12-02T17:15:37Z|00132|binding|INFO|Releasing lport 2b400733-be6e-4881-b4c2-791cab786045 from this chassis (sb_readonly=0)
Dec  2 17:15:37 compute-0 ovn_controller[97975]: 2025-12-02T17:15:37Z|00133|binding|INFO|Releasing lport 089cea48-dae2-41a3-a3af-07863c5f0392 from this chassis (sb_readonly=0)
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.574 189463 DEBUG nova.network.neutron [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.627 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.629 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.651 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.701 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.710 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.774 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.776 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:15:37 compute-0 nova_compute[189459]: 2025-12-02 17:15:37.850 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.314 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.317 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5001MB free_disk=72.10185623168945GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.317 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.318 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.367 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.414 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 4994ed6b-5e0c-4061-a84c-f46ccf29489f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.414 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance c42974d1-ca42-4b24-bf99-14f43ee59916 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.415 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance de801101-d42c-462e-98a9-7a2a649cf1d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.415 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.416 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.505 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.521 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.560 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.561 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.562 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.869 189463 DEBUG nova.network.neutron [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.893 189463 DEBUG oslo_concurrency.lockutils [req-f197546f-0719-49aa-91f6-496d1d8cbc98 req-6bfa0fe6-48b4-494f-911e-ad46b816e32e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.894 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquired lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:15:38 compute-0 nova_compute[189459]: 2025-12-02 17:15:38.895 189463 DEBUG nova.network.neutron [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 17:15:39 compute-0 nova_compute[189459]: 2025-12-02 17:15:39.422 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:15:39 compute-0 nova_compute[189459]: 2025-12-02 17:15:39.424 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:15:39 compute-0 nova_compute[189459]: 2025-12-02 17:15:39.425 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  2 17:15:39 compute-0 nova_compute[189459]: 2025-12-02 17:15:39.452 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  2 17:15:39 compute-0 nova_compute[189459]: 2025-12-02 17:15:39.761 189463 DEBUG nova.network.neutron [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 17:15:40 compute-0 nova_compute[189459]: 2025-12-02 17:15:40.438 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:15:41 compute-0 nova_compute[189459]: 2025-12-02 17:15:41.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:15:41 compute-0 nova_compute[189459]: 2025-12-02 17:15:41.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 17:15:42 compute-0 podman[253010]: 2025-12-02 17:15:42.295038882 +0000 UTC m=+0.110947365 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:15:42 compute-0 podman[253009]: 2025-12-02 17:15:42.297961129 +0000 UTC m=+0.114874528 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:15:42 compute-0 podman[253008]: 2025-12-02 17:15:42.34794996 +0000 UTC m=+0.172621394 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 17:15:42 compute-0 nova_compute[189459]: 2025-12-02 17:15:42.385 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695727.3842242, 7ef2cae4-13df-469d-8820-5435724f49c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:15:42 compute-0 nova_compute[189459]: 2025-12-02 17:15:42.385 189463 INFO nova.compute.manager [-] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] VM Stopped (Lifecycle Event)
Dec  2 17:15:42 compute-0 nova_compute[189459]: 2025-12-02 17:15:42.406 189463 DEBUG nova.compute.manager [None req-a3a66ebd-af45-45c8-b966-52f0f6c0b84a - - - - - -] [instance: 7ef2cae4-13df-469d-8820-5435724f49c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:15:42 compute-0 nova_compute[189459]: 2025-12-02 17:15:42.421 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.152 189463 DEBUG nova.network.neutron [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Updating instance_info_cache with network_info: [{"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.199 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Releasing lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.200 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Instance network_info: |[{"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.204 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Start _get_guest_xml network_info=[{"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.214 189463 WARNING nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.221 189463 DEBUG nova.virt.libvirt.host [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.223 189463 DEBUG nova.virt.libvirt.host [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.228 189463 DEBUG nova.virt.libvirt.host [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.229 189463 DEBUG nova.virt.libvirt.host [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.231 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.232 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.233 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.234 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.235 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.235 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.236 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.236 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.237 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.238 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.239 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.240 189463 DEBUG nova.virt.hardware [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.246 189463 DEBUG nova.virt.libvirt.vif [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:15:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2145186315',display_name='tempest-TestNetworkBasicOps-server-2145186315',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2145186315',id=12,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDuF1sAxqZQNz6u2VBqs9bVrNgdUFgdo8O7ErgmUtDS3XZRL6KPJpe8onhiHO7jhCaFNOGyPQ2NEeB8mwAPfaSSmTRNt7jy5j4A0Ns1cXKzVudtK+FyFTsPC/FXtJL7SKg==',key_name='tempest-TestNetworkBasicOps-213862819',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5fdb2e066254ddbbd43316d1a1a75b2',ramdisk_id='',reservation_id='r-zoql0s7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-592268676',owner_user_name='tempest-TestNetworkBasicOps-592268676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:15:34Z,user_data=None,user_id='ed4b2c7904414b1cb5c9314cf52d7eff',uuid=de801101-d42c-462e-98a9-7a2a649cf1d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.247 189463 DEBUG nova.network.os_vif_util [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converting VIF {"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.248 189463 DEBUG nova.network.os_vif_util [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.251 189463 DEBUG nova.objects.instance [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid de801101-d42c-462e-98a9-7a2a649cf1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.269 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <uuid>de801101-d42c-462e-98a9-7a2a649cf1d3</uuid>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <name>instance-0000000c</name>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:name>tempest-TestNetworkBasicOps-server-2145186315</nova:name>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:15:43</nova:creationTime>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:user uuid="ed4b2c7904414b1cb5c9314cf52d7eff">tempest-TestNetworkBasicOps-592268676-project-member</nova:user>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:project uuid="b5fdb2e066254ddbbd43316d1a1a75b2">tempest-TestNetworkBasicOps-592268676</nova:project>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        <nova:port uuid="37210948-7d27-4586-a367-e083ee7fd9e8">
Dec  2 17:15:43 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <system>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <entry name="serial">de801101-d42c-462e-98a9-7a2a649cf1d3</entry>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <entry name="uuid">de801101-d42c-462e-98a9-7a2a649cf1d3</entry>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </system>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <os>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </os>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <features>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </features>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.config"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:38:a2:b5"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <target dev="tap37210948-7d"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/console.log" append="off"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <video>
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </video>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:15:43 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:15:43 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:15:43 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:15:43 compute-0 nova_compute[189459]: </domain>
Dec  2 17:15:43 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.271 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Preparing to wait for external event network-vif-plugged-37210948-7d27-4586-a367-e083ee7fd9e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.271 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.272 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.272 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.274 189463 DEBUG nova.virt.libvirt.vif [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:15:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2145186315',display_name='tempest-TestNetworkBasicOps-server-2145186315',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2145186315',id=12,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDuF1sAxqZQNz6u2VBqs9bVrNgdUFgdo8O7ErgmUtDS3XZRL6KPJpe8onhiHO7jhCaFNOGyPQ2NEeB8mwAPfaSSmTRNt7jy5j4A0Ns1cXKzVudtK+FyFTsPC/FXtJL7SKg==',key_name='tempest-TestNetworkBasicOps-213862819',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5fdb2e066254ddbbd43316d1a1a75b2',ramdisk_id='',reservation_id='r-zoql0s7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-592268676',owner_user_name='tempest-TestNetworkBasicOps-592268676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:15:34Z,user_data=None,user_id='ed4b2c7904414b1cb5c9314cf52d7eff',uuid=de801101-d42c-462e-98a9-7a2a649cf1d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.275 189463 DEBUG nova.network.os_vif_util [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converting VIF {"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.276 189463 DEBUG nova.network.os_vif_util [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.277 189463 DEBUG os_vif [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.278 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.279 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.280 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.285 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.286 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37210948-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.287 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37210948-7d, col_values=(('external_ids', {'iface-id': '37210948-7d27-4586-a367-e083ee7fd9e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:a2:b5', 'vm-uuid': 'de801101-d42c-462e-98a9-7a2a649cf1d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:15:43 compute-0 NetworkManager[56503]: <info>  [1764695743.2920] manager: (tap37210948-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.292 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.294 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.302 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.304 189463 INFO os_vif [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d')
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.364 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.365 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.366 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] No VIF found with MAC fa:16:3e:38:a2:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.367 189463 INFO nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Using config drive
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.370 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.810 189463 INFO nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Creating config drive at /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.config
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.821 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjeze70l9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:15:43 compute-0 nova_compute[189459]: 2025-12-02 17:15:43.970 189463 DEBUG oslo_concurrency.processutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjeze70l9" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:15:44 compute-0 kernel: tap37210948-7d: entered promiscuous mode
Dec  2 17:15:44 compute-0 NetworkManager[56503]: <info>  [1764695744.0577] manager: (tap37210948-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.061 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:44 compute-0 ovn_controller[97975]: 2025-12-02T17:15:44Z|00134|binding|INFO|Claiming lport 37210948-7d27-4586-a367-e083ee7fd9e8 for this chassis.
Dec  2 17:15:44 compute-0 ovn_controller[97975]: 2025-12-02T17:15:44Z|00135|binding|INFO|37210948-7d27-4586-a367-e083ee7fd9e8: Claiming fa:16:3e:38:a2:b5 10.100.0.12
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.074 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a2:b5 10.100.0.12'], port_security=['fa:16:3e:38:a2:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'de801101-d42c-462e-98a9-7a2a649cf1d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2591d563-0f24-454c-a7d6-5a800a4529e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'afeaf459-6d05-4cbb-9286-ae9b8108a158', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa973d1f-4349-4977-a256-bb28e0fe00db, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=37210948-7d27-4586-a367-e083ee7fd9e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.075 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 37210948-7d27-4586-a367-e083ee7fd9e8 in datapath 2591d563-0f24-454c-a7d6-5a800a4529e5 bound to our chassis
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.077 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2591d563-0f24-454c-a7d6-5a800a4529e5
Dec  2 17:15:44 compute-0 ovn_controller[97975]: 2025-12-02T17:15:44Z|00136|binding|INFO|Setting lport 37210948-7d27-4586-a367-e083ee7fd9e8 ovn-installed in OVS
Dec  2 17:15:44 compute-0 ovn_controller[97975]: 2025-12-02T17:15:44Z|00137|binding|INFO|Setting lport 37210948-7d27-4586-a367-e083ee7fd9e8 up in Southbound
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.093 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[986e4c15-c611-46c9-8c07-86f63c482143]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:44 compute-0 systemd-udevd[253099]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.096 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.099 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:44 compute-0 systemd-machined[155878]: New machine qemu-13-instance-0000000c.
Dec  2 17:15:44 compute-0 NetworkManager[56503]: <info>  [1764695744.1145] device (tap37210948-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:15:44 compute-0 NetworkManager[56503]: <info>  [1764695744.1154] device (tap37210948-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:15:44 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.134 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[0c198f1d-93f8-42fc-a0b7-a8f4a320576f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.138 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[3b57e603-17e6-4437-b80d-89ae07264772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.176 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d7d1b5-dd3e-4d8c-ab4a-8901408c98ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.204 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0f074983-4590-4415-970b-930677b36836]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2591d563-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:3d:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522900, 'reachable_time': 29759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253110, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.227 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f76fbd-e38a-4ada-bba6-226e9f8fca8d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2591d563-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522914, 'tstamp': 522914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253113, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2591d563-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522917, 'tstamp': 522917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253113, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.230 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2591d563-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.232 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.234 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.235 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2591d563-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.236 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.236 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2591d563-00, col_values=(('external_ids', {'iface-id': '089cea48-dae2-41a3-a3af-07863c5f0392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:15:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:44.237 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.609 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695744.6092374, de801101-d42c-462e-98a9-7a2a649cf1d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.610 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] VM Started (Lifecycle Event)
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.633 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.639 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695744.6137753, de801101-d42c-462e-98a9-7a2a649cf1d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.640 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] VM Paused (Lifecycle Event)
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.661 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.675 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 17:15:44 compute-0 nova_compute[189459]: 2025-12-02 17:15:44.704 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.374 189463 DEBUG nova.compute.manager [req-852ed4cf-a53b-42e7-998c-d5fecfe0c85a req-16ff9672-e076-4701-bc5f-62cd41c2108e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Received event network-vif-plugged-37210948-7d27-4586-a367-e083ee7fd9e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.374 189463 DEBUG oslo_concurrency.lockutils [req-852ed4cf-a53b-42e7-998c-d5fecfe0c85a req-16ff9672-e076-4701-bc5f-62cd41c2108e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.375 189463 DEBUG oslo_concurrency.lockutils [req-852ed4cf-a53b-42e7-998c-d5fecfe0c85a req-16ff9672-e076-4701-bc5f-62cd41c2108e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.375 189463 DEBUG oslo_concurrency.lockutils [req-852ed4cf-a53b-42e7-998c-d5fecfe0c85a req-16ff9672-e076-4701-bc5f-62cd41c2108e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.376 189463 DEBUG nova.compute.manager [req-852ed4cf-a53b-42e7-998c-d5fecfe0c85a req-16ff9672-e076-4701-bc5f-62cd41c2108e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Processing event network-vif-plugged-37210948-7d27-4586-a367-e083ee7fd9e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.377 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.383 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695745.3825598, de801101-d42c-462e-98a9-7a2a649cf1d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.383 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] VM Resumed (Lifecycle Event)
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.388 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.397 189463 INFO nova.virt.libvirt.driver [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Instance spawned successfully.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.398 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.406 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.415 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.421 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.422 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.422 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.423 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.423 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.424 189463 DEBUG nova.virt.libvirt.driver [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.454 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.483 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.483 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.484 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.484 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.484 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.486 189463 INFO nova.compute.manager [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Terminating instance
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.487 189463 DEBUG nova.compute.manager [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.502 189463 INFO nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Took 11.07 seconds to spawn the instance on the hypervisor.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.503 189463 DEBUG nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:15:45 compute-0 kernel: tap5f7c429b-02 (unregistering): left promiscuous mode
Dec  2 17:15:45 compute-0 NetworkManager[56503]: <info>  [1764695745.5356] device (tap5f7c429b-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:15:45 compute-0 ovn_controller[97975]: 2025-12-02T17:15:45Z|00138|binding|INFO|Releasing lport 5f7c429b-020f-4314-b208-6820880dcf81 from this chassis (sb_readonly=0)
Dec  2 17:15:45 compute-0 ovn_controller[97975]: 2025-12-02T17:15:45Z|00139|binding|INFO|Setting lport 5f7c429b-020f-4314-b208-6820880dcf81 down in Southbound
Dec  2 17:15:45 compute-0 ovn_controller[97975]: 2025-12-02T17:15:45Z|00140|binding|INFO|Removing iface tap5f7c429b-02 ovn-installed in OVS
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.569 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.572 189463 INFO nova.compute.manager [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Took 13.20 seconds to build instance.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.573 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.576 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:b9 10.100.0.5'], port_security=['fa:16:3e:df:76:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4994ed6b-5e0c-4061-a84c-f46ccf29489f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95abfdbd702a49dc89fc01dd45a4e014', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c8a6a28c-4df2-4758-a58f-e25b3a4dbf0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ba938b6-3c05-41dd-ab92-658c8cac6fe8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=5f7c429b-020f-4314-b208-6820880dcf81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.578 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 5f7c429b-020f-4314-b208-6820880dcf81 in datapath 5882ec1f-b595-4c00-871f-f9ec4c7212bd unbound from our chassis
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.581 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5882ec1f-b595-4c00-871f-f9ec4c7212bd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.583 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[7f153771-3779-4308-8413-a391c996c0ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.584 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd namespace which is not needed anymore
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.585 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.593 189463 DEBUG oslo_concurrency.lockutils [None req-3ddad555-c09f-4960-8534-eb914f8eeec6 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:45 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  2 17:15:45 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000007.scope: Consumed 43.484s CPU time.
Dec  2 17:15:45 compute-0 systemd-machined[155878]: Machine qemu-11-instance-00000007 terminated.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.714 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.720 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.753 189463 INFO nova.virt.libvirt.driver [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Instance destroyed successfully.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.754 189463 DEBUG nova.objects.instance [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lazy-loading 'resources' on Instance uuid 4994ed6b-5e0c-4061-a84c-f46ccf29489f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 17:15:45 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [NOTICE]   (252422) : haproxy version is 2.8.14-c23fe91
Dec  2 17:15:45 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [NOTICE]   (252422) : path to executable is /usr/sbin/haproxy
Dec  2 17:15:45 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [WARNING]  (252422) : Exiting Master process...
Dec  2 17:15:45 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [WARNING]  (252422) : Exiting Master process...
Dec  2 17:15:45 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [ALERT]    (252422) : Current worker (252424) exited with code 143 (Terminated)
Dec  2 17:15:45 compute-0 neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd[252418]: [WARNING]  (252422) : All workers exited. Exiting... (0)
Dec  2 17:15:45 compute-0 systemd[1]: libpod-6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab.scope: Deactivated successfully.
Dec  2 17:15:45 compute-0 podman[253142]: 2025-12-02 17:15:45.772731948 +0000 UTC m=+0.073509495 container died 6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.778 189463 DEBUG nova.virt.libvirt.vif [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:13:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-254489110',display_name='tempest-ServerActionsTestJSON-server-254489110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-254489110',id=7,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMyR6bavm+MQZcauyhM005zly03nJhuNCVCQKPs0wvfP+MadqCcadkL/Bt8XjTTL8eXxwcDouWS8ZnjdrrFLuYbkYPXzyqLW1B47ah/PB2GNnHP9UuwTuNdPcLluy6idxQ==',key_name='tempest-keypair-508494976',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:13:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95abfdbd702a49dc89fc01dd45a4e014',ramdisk_id='',reservation_id='r-ekeaadjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897427034',owner_user_name='tempest-ServerActionsTestJSON-897427034-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:14:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c800961435cb4a418a6ee67240a574fe',uuid=4994ed6b-5e0c-4061-a84c-f46ccf29489f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.778 189463 DEBUG nova.network.os_vif_util [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converting VIF {"id": "5f7c429b-020f-4314-b208-6820880dcf81", "address": "fa:16:3e:df:76:b9", "network": {"id": "5882ec1f-b595-4c00-871f-f9ec4c7212bd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-332004562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95abfdbd702a49dc89fc01dd45a4e014", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f7c429b-02", "ovs_interfaceid": "5f7c429b-020f-4314-b208-6820880dcf81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.779 189463 DEBUG nova.network.os_vif_util [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.779 189463 DEBUG os_vif [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.781 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.781 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f7c429b-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.785 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.789 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.792 189463 INFO os_vif [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:b9,bridge_name='br-int',has_traffic_filtering=True,id=5f7c429b-020f-4314-b208-6820880dcf81,network=Network(5882ec1f-b595-4c00-871f-f9ec4c7212bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f7c429b-02')
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.793 189463 INFO nova.virt.libvirt.driver [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Deleting instance files /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f_del
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.794 189463 INFO nova.virt.libvirt.driver [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Deletion of /var/lib/nova/instances/4994ed6b-5e0c-4061-a84c-f46ccf29489f_del complete
Dec  2 17:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab-userdata-shm.mount: Deactivated successfully.
Dec  2 17:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af576b7006e13d2e9c814e4cec8e81d2dbc56af3a1fdeeb56d1609011f1ec06-merged.mount: Deactivated successfully.
Dec  2 17:15:45 compute-0 podman[253142]: 2025-12-02 17:15:45.854868379 +0000 UTC m=+0.155645926 container cleanup 6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:15:45 compute-0 systemd[1]: libpod-conmon-6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab.scope: Deactivated successfully.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.951 189463 INFO nova.compute.manager [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Took 0.46 seconds to destroy the instance on the hypervisor.
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.952 189463 DEBUG oslo.service.loopingcall [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.952 189463 DEBUG nova.compute.manager [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.952 189463 DEBUG nova.network.neutron [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  2 17:15:45 compute-0 podman[253186]: 2025-12-02 17:15:45.954166294 +0000 UTC m=+0.069037076 container remove 6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.963 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[47015dd3-5a3d-424c-88cb-9ec7f1956169]: (4, ('Tue Dec  2 05:15:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd (6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab)\n6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab\nTue Dec  2 05:15:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd (6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab)\n6e709a57cdd009c72f21c7767ec85473a36d70928bc4d71b181e94e04fb07cab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.979 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f480f73e-1bcc-4bb6-ab1f-826eebf52ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:45 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:45.983 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5882ec1f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:15:45 compute-0 nova_compute[189459]: 2025-12-02 17:15:45.986 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:46 compute-0 kernel: tap5882ec1f-b0: left promiscuous mode
Dec  2 17:15:46 compute-0 nova_compute[189459]: 2025-12-02 17:15:46.013 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:46.020 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[dc561eb9-cc17-4ad8-bbfb-70be2d799dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:46.037 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e9fa2e-0088-45e7-9ceb-734a8e861ce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:46.039 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c439d5ef-cae3-4136-bdd2-0ea1f2ba9b1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:46.062 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb13143-625c-4df6-b095-521fdc890228]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522778, 'reachable_time': 19636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253200, 'error': None, 'target': 'ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d5882ec1f\x2db595\x2d4c00\x2d871f\x2df9ec4c7212bd.mount: Deactivated successfully.
Dec  2 17:15:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:46.066 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5882ec1f-b595-4c00-871f-f9ec4c7212bd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  2 17:15:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:15:46.066 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[9d00c3c5-a71f-4359-b5dc-6e6dcd7dded1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.518 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Received event network-vif-plugged-37210948-7d27-4586-a367-e083ee7fd9e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.518 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.519 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.519 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.520 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] No waiting events found dispatching network-vif-plugged-37210948-7d27-4586-a367-e083ee7fd9e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.520 189463 WARNING nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Received unexpected event network-vif-plugged-37210948-7d27-4586-a367-e083ee7fd9e8 for instance with vm_state active and task_state None.
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.521 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-unplugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.521 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.521 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.522 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.522 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-unplugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.523 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-unplugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.523 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.524 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.524 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.525 189463 DEBUG oslo_concurrency.lockutils [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.525 189463 DEBUG nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] No waiting events found dispatching network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.525 189463 WARNING nova.compute.manager [req-bc82a286-1860-4c1c-b3a0-43abccdf6480 req-f7a32d27-ae7e-4cd4-8519-5a1d249a193e b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received unexpected event network-vif-plugged-5f7c429b-020f-4314-b208-6820880dcf81 for instance with vm_state active and task_state deleting.#033[00m
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.830 189463 DEBUG nova.network.neutron [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.876 189463 INFO nova.compute.manager [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Took 1.92 seconds to deallocate network for instance.#033[00m
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.935 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:15:47 compute-0 nova_compute[189459]: 2025-12-02 17:15:47.935 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.082 189463 DEBUG nova.compute.manager [req-e4b0766a-b756-467f-8eee-d06e2e5d8fbe req-e278c5ef-fa5e-43f7-a072-4a6f33a537eb b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Received event network-vif-deleted-5f7c429b-020f-4314-b208-6820880dcf81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.211 189463 DEBUG nova.compute.provider_tree [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.278 189463 DEBUG nova.scheduler.client.report [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.326 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.372 189463 INFO nova.scheduler.client.report [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Deleted allocations for instance 4994ed6b-5e0c-4061-a84c-f46ccf29489f
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.373 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  2 17:15:48 compute-0 nova_compute[189459]: 2025-12-02 17:15:48.735 189463 DEBUG oslo_concurrency.lockutils [None req-52798191-bfcb-4cb0-a969-9c97ba70179c c800961435cb4a418a6ee67240a574fe 95abfdbd702a49dc89fc01dd45a4e014 - - default default] Lock "4994ed6b-5e0c-4061-a84c-f46ccf29489f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:15:50 compute-0 nova_compute[189459]: 2025-12-02 17:15:50.721 189463 DEBUG nova.compute.manager [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Received event network-changed-37210948-7d27-4586-a367-e083ee7fd9e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:15:50 compute-0 nova_compute[189459]: 2025-12-02 17:15:50.721 189463 DEBUG nova.compute.manager [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Refreshing instance network info cache due to event network-changed-37210948-7d27-4586-a367-e083ee7fd9e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 17:15:50 compute-0 nova_compute[189459]: 2025-12-02 17:15:50.721 189463 DEBUG oslo_concurrency.lockutils [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 17:15:50 compute-0 nova_compute[189459]: 2025-12-02 17:15:50.722 189463 DEBUG oslo_concurrency.lockutils [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:15:50 compute-0 nova_compute[189459]: 2025-12-02 17:15:50.722 189463 DEBUG nova.network.neutron [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Refreshing network info cache for port 37210948-7d27-4586-a367-e083ee7fd9e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 17:15:50 compute-0 nova_compute[189459]: 2025-12-02 17:15:50.785 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:52 compute-0 nova_compute[189459]: 2025-12-02 17:15:52.368 189463 DEBUG nova.network.neutron [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Updated VIF entry in instance network info cache for port 37210948-7d27-4586-a367-e083ee7fd9e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  2 17:15:52 compute-0 nova_compute[189459]: 2025-12-02 17:15:52.368 189463 DEBUG nova.network.neutron [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Updating instance_info_cache with network_info: [{"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:15:52 compute-0 nova_compute[189459]: 2025-12-02 17:15:52.392 189463 DEBUG oslo_concurrency.lockutils [req-b12c88c6-ae79-4a40-a69f-9def9e23e700 req-62ee6d3e-ab1a-454f-8815-932ceb7d0571 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-de801101-d42c-462e-98a9-7a2a649cf1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:15:53 compute-0 nova_compute[189459]: 2025-12-02 17:15:53.376 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:53 compute-0 ovn_controller[97975]: 2025-12-02T17:15:53Z|00141|binding|INFO|Releasing lport 089cea48-dae2-41a3-a3af-07863c5f0392 from this chassis (sb_readonly=0)
Dec  2 17:15:53 compute-0 nova_compute[189459]: 2025-12-02 17:15:53.816 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:55 compute-0 podman[253203]: 2025-12-02 17:15:55.346035334 +0000 UTC m=+0.153316984 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec  2 17:15:55 compute-0 nova_compute[189459]: 2025-12-02 17:15:55.788 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:58 compute-0 nova_compute[189459]: 2025-12-02 17:15:58.381 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:15:59 compute-0 podman[203941]: time="2025-12-02T17:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:15:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:15:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec  2 17:16:00 compute-0 podman[253223]: 2025-12-02 17:16:00.254109054 +0000 UTC m=+0.089894257 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 17:16:00 compute-0 podman[253224]: 2025-12-02 17:16:00.274227146 +0000 UTC m=+0.098721781 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 17:16:00 compute-0 nova_compute[189459]: 2025-12-02 17:16:00.749 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695745.748164, 4994ed6b-5e0c-4061-a84c-f46ccf29489f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:16:00 compute-0 nova_compute[189459]: 2025-12-02 17:16:00.750 189463 INFO nova.compute.manager [-] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] VM Stopped (Lifecycle Event)
Dec  2 17:16:00 compute-0 nova_compute[189459]: 2025-12-02 17:16:00.767 189463 DEBUG nova.compute.manager [None req-b9c1439e-7216-463d-a3da-4c4fa4479816 - - - - - -] [instance: 4994ed6b-5e0c-4061-a84c-f46ccf29489f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:16:00 compute-0 nova_compute[189459]: 2025-12-02 17:16:00.792 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: ERROR   17:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: ERROR   17:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: ERROR   17:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: ERROR   17:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: ERROR   17:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:16:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:16:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:01.887 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:16:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:01.888 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:16:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:01.889 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:16:03 compute-0 nova_compute[189459]: 2025-12-02 17:16:03.384 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:04 compute-0 podman[253265]: 2025-12-02 17:16:04.26065029 +0000 UTC m=+0.085618797 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Dec  2 17:16:04 compute-0 podman[253264]: 2025-12-02 17:16:04.261268446 +0000 UTC m=+0.089034347 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:16:04 compute-0 podman[253266]: 2025-12-02 17:16:04.267282065 +0000 UTC m=+0.081946449 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  2 17:16:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:04.489 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:16:04 compute-0 nova_compute[189459]: 2025-12-02 17:16:04.490 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:04.492 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:16:05 compute-0 nova_compute[189459]: 2025-12-02 17:16:05.795 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:08 compute-0 nova_compute[189459]: 2025-12-02 17:16:08.388 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:10 compute-0 nova_compute[189459]: 2025-12-02 17:16:10.798 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:13 compute-0 podman[253318]: 2025-12-02 17:16:13.246561923 +0000 UTC m=+0.069294515 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:16:13 compute-0 podman[253319]: 2025-12-02 17:16:13.273429094 +0000 UTC m=+0.092337474 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:16:13 compute-0 podman[253317]: 2025-12-02 17:16:13.316097073 +0000 UTC m=+0.143640212 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:16:13 compute-0 nova_compute[189459]: 2025-12-02 17:16:13.390 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:13 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:13.496 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:15 compute-0 nova_compute[189459]: 2025-12-02 17:16:15.803 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:18 compute-0 nova_compute[189459]: 2025-12-02 17:16:18.393 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:20 compute-0 ovn_controller[97975]: 2025-12-02T17:16:20Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:a2:b5 10.100.0.12
Dec  2 17:16:20 compute-0 ovn_controller[97975]: 2025-12-02T17:16:20Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:a2:b5 10.100.0.12
Dec  2 17:16:20 compute-0 nova_compute[189459]: 2025-12-02 17:16:20.809 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:23 compute-0 nova_compute[189459]: 2025-12-02 17:16:23.398 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:25 compute-0 nova_compute[189459]: 2025-12-02 17:16:25.814 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:26 compute-0 podman[253414]: 2025-12-02 17:16:26.252118874 +0000 UTC m=+0.071427641 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec  2 17:16:27 compute-0 nova_compute[189459]: 2025-12-02 17:16:27.952 189463 INFO nova.compute.manager [None req-12063d20-0171-415a-ac31-2e8fee0443e2 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Get console output#033[00m
Dec  2 17:16:27 compute-0 nova_compute[189459]: 2025-12-02 17:16:27.960 239820 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.300 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "de801101-d42c-462e-98a9-7a2a649cf1d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.300 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.301 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.301 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.302 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.303 189463 INFO nova.compute.manager [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Terminating instance#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.304 189463 DEBUG nova.compute.manager [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:16:28 compute-0 kernel: tap37210948-7d (unregistering): left promiscuous mode
Dec  2 17:16:28 compute-0 NetworkManager[56503]: <info>  [1764695788.3519] device (tap37210948-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.360 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 ovn_controller[97975]: 2025-12-02T17:16:28Z|00142|binding|INFO|Releasing lport 37210948-7d27-4586-a367-e083ee7fd9e8 from this chassis (sb_readonly=0)
Dec  2 17:16:28 compute-0 ovn_controller[97975]: 2025-12-02T17:16:28Z|00143|binding|INFO|Setting lport 37210948-7d27-4586-a367-e083ee7fd9e8 down in Southbound
Dec  2 17:16:28 compute-0 ovn_controller[97975]: 2025-12-02T17:16:28Z|00144|binding|INFO|Removing iface tap37210948-7d ovn-installed in OVS
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.365 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.373 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a2:b5 10.100.0.12'], port_security=['fa:16:3e:38:a2:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'de801101-d42c-462e-98a9-7a2a649cf1d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2591d563-0f24-454c-a7d6-5a800a4529e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'afeaf459-6d05-4cbb-9286-ae9b8108a158', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa973d1f-4349-4977-a256-bb28e0fe00db, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=37210948-7d27-4586-a367-e083ee7fd9e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.375 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 37210948-7d27-4586-a367-e083ee7fd9e8 in datapath 2591d563-0f24-454c-a7d6-5a800a4529e5 unbound from our chassis#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.375 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.377 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2591d563-0f24-454c-a7d6-5a800a4529e5#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.398 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d0da3093-4d88-4b42-8370-eab16da2d3db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.401 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  2 17:16:28 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 38.371s CPU time.
Dec  2 17:16:28 compute-0 systemd-machined[155878]: Machine qemu-13-instance-0000000c terminated.
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.430 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b58ef2-bfb1-4156-937f-ede1a07ce1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.434 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[b1de47e9-2d4a-4f8e-975b-1b08888bd5c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.461 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[14477564-f0a3-41a4-8efb-7888af3f6e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.480 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed822f1-3b21-4140-82a8-91010b44f753]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2591d563-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:3d:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522900, 'reachable_time': 29759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253446, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.506 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c461456e-d6c1-406e-b702-b3023e9f6f0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2591d563-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522914, 'tstamp': 522914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253447, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2591d563-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522917, 'tstamp': 522917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253447, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.508 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2591d563-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.510 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.518 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.519 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2591d563-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.520 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.521 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2591d563-00, col_values=(('external_ids', {'iface-id': '089cea48-dae2-41a3-a3af-07863c5f0392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:28.521 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:16:28 compute-0 kernel: tap37210948-7d: entered promiscuous mode
Dec  2 17:16:28 compute-0 kernel: tap37210948-7d (unregistering): left promiscuous mode
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.558 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.620 189463 INFO nova.virt.libvirt.driver [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Instance destroyed successfully.#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.622 189463 DEBUG nova.objects.instance [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lazy-loading 'resources' on Instance uuid de801101-d42c-462e-98a9-7a2a649cf1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.649 189463 DEBUG nova.virt.libvirt.vif [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:15:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2145186315',display_name='tempest-TestNetworkBasicOps-server-2145186315',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2145186315',id=12,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDuF1sAxqZQNz6u2VBqs9bVrNgdUFgdo8O7ErgmUtDS3XZRL6KPJpe8onhiHO7jhCaFNOGyPQ2NEeB8mwAPfaSSmTRNt7jy5j4A0Ns1cXKzVudtK+FyFTsPC/FXtJL7SKg==',key_name='tempest-TestNetworkBasicOps-213862819',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:15:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5fdb2e066254ddbbd43316d1a1a75b2',ramdisk_id='',reservation_id='r-zoql0s7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-592268676',owner_user_name='tempest-TestNetworkBasicOps-592268676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:15:45Z,user_data=None,user_id='ed4b2c7904414b1cb5c9314cf52d7eff',uuid=de801101-d42c-462e-98a9-7a2a649cf1d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.650 189463 DEBUG nova.network.os_vif_util [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converting VIF {"id": "37210948-7d27-4586-a367-e083ee7fd9e8", "address": "fa:16:3e:38:a2:b5", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37210948-7d", "ovs_interfaceid": "37210948-7d27-4586-a367-e083ee7fd9e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.651 189463 DEBUG nova.network.os_vif_util [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.652 189463 DEBUG os_vif [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.653 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.654 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37210948-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.656 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.660 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.665 189463 INFO os_vif [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:a2:b5,bridge_name='br-int',has_traffic_filtering=True,id=37210948-7d27-4586-a367-e083ee7fd9e8,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37210948-7d')#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.666 189463 INFO nova.virt.libvirt.driver [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Deleting instance files /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3_del#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.666 189463 INFO nova.virt.libvirt.driver [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Deletion of /var/lib/nova/instances/de801101-d42c-462e-98a9-7a2a649cf1d3_del complete#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.721 189463 INFO nova.compute.manager [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.722 189463 DEBUG oslo.service.loopingcall [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.722 189463 DEBUG nova.compute.manager [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:16:28 compute-0 nova_compute[189459]: 2025-12-02 17:16:28.723 189463 DEBUG nova.network.neutron [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.494 189463 DEBUG nova.network.neutron [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.512 189463 INFO nova.compute.manager [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Took 0.79 seconds to deallocate network for instance.#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.550 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.565 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.566 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.600 189463 DEBUG nova.compute.manager [req-bf07f01c-c858-4b07-9ef9-1b9e82d860ea req-fdaa5b2d-d776-48aa-aeac-9d50cba24255 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Received event network-vif-deleted-37210948-7d27-4586-a367-e083ee7fd9e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.658 189463 DEBUG nova.compute.provider_tree [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.670 189463 DEBUG nova.scheduler.client.report [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.687 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.706 189463 INFO nova.scheduler.client.report [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Deleted allocations for instance de801101-d42c-462e-98a9-7a2a649cf1d3#033[00m
Dec  2 17:16:29 compute-0 podman[203941]: time="2025-12-02T17:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:16:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:16:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec  2 17:16:29 compute-0 nova_compute[189459]: 2025-12-02 17:16:29.769 189463 DEBUG oslo_concurrency.lockutils [None req-d7642185-9a01-44de-bc85-808ca2134683 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "de801101-d42c-462e-98a9-7a2a649cf1d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:31 compute-0 podman[253464]: 2025-12-02 17:16:31.260645667 +0000 UTC m=+0.079545616 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  2 17:16:31 compute-0 podman[253465]: 2025-12-02 17:16:31.279527636 +0000 UTC m=+0.093579987 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:16:31 compute-0 openstack_network_exporter[206093]: ERROR   17:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:16:31 compute-0 openstack_network_exporter[206093]: ERROR   17:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:16:31 compute-0 openstack_network_exporter[206093]: ERROR   17:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:16:31 compute-0 openstack_network_exporter[206093]: ERROR   17:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:16:31 compute-0 openstack_network_exporter[206093]: ERROR   17:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.423 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.425 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.425 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.855 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.855 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.856 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:16:31 compute-0 nova_compute[189459]: 2025-12-02 17:16:31.856 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid c42974d1-ca42-4b24-bf99-14f43ee59916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.175 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.176 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.176 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.176 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.176 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.177 189463 INFO nova.compute.manager [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Terminating instance#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.178 189463 DEBUG nova.compute.manager [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:16:33 compute-0 kernel: tap84301772-f4 (unregistering): left promiscuous mode
Dec  2 17:16:33 compute-0 NetworkManager[56503]: <info>  [1764695793.3048] device (tap84301772-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:16:33 compute-0 ovn_controller[97975]: 2025-12-02T17:16:33Z|00145|binding|INFO|Releasing lport 84301772-f4d5-42b6-bb8d-a3217c3c9135 from this chassis (sb_readonly=0)
Dec  2 17:16:33 compute-0 ovn_controller[97975]: 2025-12-02T17:16:33Z|00146|binding|INFO|Setting lport 84301772-f4d5-42b6-bb8d-a3217c3c9135 down in Southbound
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.313 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 ovn_controller[97975]: 2025-12-02T17:16:33Z|00147|binding|INFO|Removing iface tap84301772-f4 ovn-installed in OVS
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.317 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.334 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d2:17 10.100.0.13'], port_security=['fa:16:3e:a9:d2:17 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c42974d1-ca42-4b24-bf99-14f43ee59916', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2591d563-0f24-454c-a7d6-5a800a4529e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5fdb2e066254ddbbd43316d1a1a75b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6755368d-a77d-4335-bd69-0f08f2712850', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa973d1f-4349-4977-a256-bb28e0fe00db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=84301772-f4d5-42b6-bb8d-a3217c3c9135) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.336 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.337 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 84301772-f4d5-42b6-bb8d-a3217c3c9135 in datapath 2591d563-0f24-454c-a7d6-5a800a4529e5 unbound from our chassis#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.340 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2591d563-0f24-454c-a7d6-5a800a4529e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.341 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[9237cba5-04cb-42bc-b048-f96d5393b71a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.342 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5 namespace which is not needed anymore#033[00m
Dec  2 17:16:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  2 17:16:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 47.706s CPU time.
Dec  2 17:16:33 compute-0 systemd-machined[155878]: Machine qemu-12-instance-0000000b terminated.
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.420 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.453 189463 INFO nova.virt.libvirt.driver [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Instance destroyed successfully.#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.453 189463 DEBUG nova.objects.instance [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lazy-loading 'resources' on Instance uuid c42974d1-ca42-4b24-bf99-14f43ee59916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.474 189463 DEBUG nova.virt.libvirt.vif [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:14:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1141860031',display_name='tempest-TestNetworkBasicOps-server-1141860031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1141860031',id=11,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI3xMbBwXOKYpHyDLgj1no2pesb80gxFUUAcWy/VgL8qtPGbYtR1FfC4raPyHH8Nhv/kDJBDHp89xaBomyA3RfYH/tyxB0ptma7jHSFm26Tytf6R1iZbF1KGp8fwu9OdFQ==',key_name='tempest-TestNetworkBasicOps-292184376',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:14:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5fdb2e066254ddbbd43316d1a1a75b2',ramdisk_id='',reservation_id='r-zfgnxf9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-592268676',owner_user_name='tempest-TestNetworkBasicOps-592268676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:14:41Z,user_data=None,user_id='ed4b2c7904414b1cb5c9314cf52d7eff',uuid=c42974d1-ca42-4b24-bf99-14f43ee59916,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.474 189463 DEBUG nova.network.os_vif_util [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converting VIF {"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.475 189463 DEBUG nova.network.os_vif_util [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.475 189463 DEBUG os_vif [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.476 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.476 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84301772-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.478 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.480 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.481 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.483 189463 INFO os_vif [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:d2:17,bridge_name='br-int',has_traffic_filtering=True,id=84301772-f4d5-42b6-bb8d-a3217c3c9135,network=Network(2591d563-0f24-454c-a7d6-5a800a4529e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84301772-f4')#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.484 189463 INFO nova.virt.libvirt.driver [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Deleting instance files /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916_del#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.485 189463 INFO nova.virt.libvirt.driver [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Deletion of /var/lib/nova/instances/c42974d1-ca42-4b24-bf99-14f43ee59916_del complete#033[00m
Dec  2 17:16:33 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [NOTICE]   (252565) : haproxy version is 2.8.14-c23fe91
Dec  2 17:16:33 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [NOTICE]   (252565) : path to executable is /usr/sbin/haproxy
Dec  2 17:16:33 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [WARNING]  (252565) : Exiting Master process...
Dec  2 17:16:33 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [ALERT]    (252565) : Current worker (252567) exited with code 143 (Terminated)
Dec  2 17:16:33 compute-0 neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5[252561]: [WARNING]  (252565) : All workers exited. Exiting... (0)
Dec  2 17:16:33 compute-0 systemd[1]: libpod-d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9.scope: Deactivated successfully.
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.543 189463 INFO nova.compute.manager [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.543 189463 DEBUG oslo.service.loopingcall [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.544 189463 DEBUG nova.compute.manager [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.544 189463 DEBUG nova.network.neutron [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:16:33 compute-0 conmon[252561]: conmon d7e5bae7da2a9e821cf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9.scope/container/memory.events
Dec  2 17:16:33 compute-0 podman[253541]: 2025-12-02 17:16:33.548483352 +0000 UTC m=+0.067844916 container died d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  2 17:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9-userdata-shm.mount: Deactivated successfully.
Dec  2 17:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab051721dd85874a67e6f450ced4a8c9198ec212e42cfccf523f40a811f47f0c-merged.mount: Deactivated successfully.
Dec  2 17:16:33 compute-0 podman[253541]: 2025-12-02 17:16:33.711007562 +0000 UTC m=+0.230369146 container cleanup d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 17:16:33 compute-0 systemd[1]: libpod-conmon-d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9.scope: Deactivated successfully.
Dec  2 17:16:33 compute-0 podman[253568]: 2025-12-02 17:16:33.832309361 +0000 UTC m=+0.090971718 container remove d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.841 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[80f7dfce-e607-43fc-8108-db0f3a5910b7]: (4, ('Tue Dec  2 05:16:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5 (d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9)\nd7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9\nTue Dec  2 05:16:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5 (d7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9)\nd7e5bae7da2a9e821cf54ac9c6079d7414effa88a7f300051107f21464c46ba9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.843 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce2e434-7ce2-4f9a-a46f-ae0a116360da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.845 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2591d563-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.847 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 kernel: tap2591d563-00: left promiscuous mode
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.873 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.876 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec85938-8dad-44a7-81db-7063d0e643ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.889 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[2818cd90-fa32-48d3-8c43-a875c3e9ea78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.891 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc138f3-ed0d-4f95-a3f5-1442fb0c9fbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.905 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[709242d6-23e0-4783-8ca0-0dfa73c674e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522891, 'reachable_time': 19307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253585, 'error': None, 'target': 'ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.908 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2591d563-0f24-454c-a7d6-5a800a4529e5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:16:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:16:33.908 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[99dfa5da-3183-403b-86f8-60b2cf3c7692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:16:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d2591d563\x2d0f24\x2d454c\x2da7d6\x2d5a800a4529e5.mount: Deactivated successfully.
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.970 189463 DEBUG nova.compute.manager [req-7e3e12cc-67c8-4797-a86d-7005b01bdbd0 req-b4735dd7-12f8-41b0-928d-ab6820c67b95 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-vif-unplugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.971 189463 DEBUG oslo_concurrency.lockutils [req-7e3e12cc-67c8-4797-a86d-7005b01bdbd0 req-b4735dd7-12f8-41b0-928d-ab6820c67b95 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.971 189463 DEBUG oslo_concurrency.lockutils [req-7e3e12cc-67c8-4797-a86d-7005b01bdbd0 req-b4735dd7-12f8-41b0-928d-ab6820c67b95 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.972 189463 DEBUG oslo_concurrency.lockutils [req-7e3e12cc-67c8-4797-a86d-7005b01bdbd0 req-b4735dd7-12f8-41b0-928d-ab6820c67b95 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.972 189463 DEBUG nova.compute.manager [req-7e3e12cc-67c8-4797-a86d-7005b01bdbd0 req-b4735dd7-12f8-41b0-928d-ab6820c67b95 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] No waiting events found dispatching network-vif-unplugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:16:33 compute-0 nova_compute[189459]: 2025-12-02 17:16:33.972 189463 DEBUG nova.compute.manager [req-7e3e12cc-67c8-4797-a86d-7005b01bdbd0 req-b4735dd7-12f8-41b0-928d-ab6820c67b95 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-vif-unplugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.568 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updating instance_info_cache with network_info: [{"id": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "address": "fa:16:3e:a9:d2:17", "network": {"id": "2591d563-0f24-454c-a7d6-5a800a4529e5", "bridge": "br-int", "label": "tempest-network-smoke--1256485445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5fdb2e066254ddbbd43316d1a1a75b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84301772-f4", "ovs_interfaceid": "84301772-f4d5-42b6-bb8d-a3217c3c9135", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.606 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-c42974d1-ca42-4b24-bf99-14f43ee59916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.607 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.607 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.608 189463 DEBUG nova.network.neutron [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.634 189463 INFO nova.compute.manager [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Took 1.09 seconds to deallocate network for instance.#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.695 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.695 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.746 189463 DEBUG nova.compute.provider_tree [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.759 189463 DEBUG nova.scheduler.client.report [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.792 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:34 compute-0 nova_compute[189459]: 2025-12-02 17:16:34.819 189463 INFO nova.scheduler.client.report [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Deleted allocations for instance c42974d1-ca42-4b24-bf99-14f43ee59916#033[00m
Dec  2 17:16:35 compute-0 nova_compute[189459]: 2025-12-02 17:16:35.015 189463 DEBUG oslo_concurrency.lockutils [None req-b34a32ca-f8d9-46b4-8acf-f15477131ad8 ed4b2c7904414b1cb5c9314cf52d7eff b5fdb2e066254ddbbd43316d1a1a75b2 - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:16:35 compute-0 podman[253586]: 2025-12-02 17:16:35.255765575 +0000 UTC m=+0.088055711 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:16:35 compute-0 podman[253587]: 2025-12-02 17:16:35.27597823 +0000 UTC m=+0.104465725 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.buildah.version=1.29.0, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  2 17:16:35 compute-0 podman[253588]: 2025-12-02 17:16:35.282617746 +0000 UTC m=+0.105896224 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.090 189463 DEBUG nova.compute.manager [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.090 189463 DEBUG oslo_concurrency.lockutils [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.090 189463 DEBUG oslo_concurrency.lockutils [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.090 189463 DEBUG oslo_concurrency.lockutils [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "c42974d1-ca42-4b24-bf99-14f43ee59916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.091 189463 DEBUG nova.compute.manager [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] No waiting events found dispatching network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.091 189463 WARNING nova.compute.manager [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received unexpected event network-vif-plugged-84301772-f4d5-42b6-bb8d-a3217c3c9135 for instance with vm_state deleted and task_state None.
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.091 189463 DEBUG nova.compute.manager [req-ccfdaad2-12c6-4737-a960-5102ddd69067 req-29c1f40a-640f-4caa-96f1-39574764b103 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Received event network-vif-deleted-84301772-f4d5-42b6-bb8d-a3217c3c9135 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 17:16:36 compute-0 nova_compute[189459]: 2025-12-02 17:16:36.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.436 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.869 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.870 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5341MB free_disk=72.16052627563477GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.871 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.871 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.936 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.937 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.961 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:16:37 compute-0 nova_compute[189459]: 2025-12-02 17:16:37.982 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:16:38 compute-0 nova_compute[189459]: 2025-12-02 17:16:38.003 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 17:16:38 compute-0 nova_compute[189459]: 2025-12-02 17:16:38.004 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:16:38 compute-0 nova_compute[189459]: 2025-12-02 17:16:38.417 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:38 compute-0 nova_compute[189459]: 2025-12-02 17:16:38.479 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:38 compute-0 nova_compute[189459]: 2025-12-02 17:16:38.518 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:38 compute-0 nova_compute[189459]: 2025-12-02 17:16:38.530 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:40 compute-0 nova_compute[189459]: 2025-12-02 17:16:40.005 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:16:41 compute-0 nova_compute[189459]: 2025-12-02 17:16:41.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:16:41 compute-0 nova_compute[189459]: 2025-12-02 17:16:41.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:16:41 compute-0 nova_compute[189459]: 2025-12-02 17:16:41.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 17:16:43 compute-0 nova_compute[189459]: 2025-12-02 17:16:43.482 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:43 compute-0 nova_compute[189459]: 2025-12-02 17:16:43.522 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:43 compute-0 nova_compute[189459]: 2025-12-02 17:16:43.617 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695788.614525, de801101-d42c-462e-98a9-7a2a649cf1d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:16:43 compute-0 nova_compute[189459]: 2025-12-02 17:16:43.618 189463 INFO nova.compute.manager [-] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] VM Stopped (Lifecycle Event)
Dec  2 17:16:43 compute-0 nova_compute[189459]: 2025-12-02 17:16:43.647 189463 DEBUG nova.compute.manager [None req-f6a39a93-8e05-4260-82bb-bd541b94d101 - - - - - -] [instance: de801101-d42c-462e-98a9-7a2a649cf1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:16:44 compute-0 podman[253643]: 2025-12-02 17:16:44.247522341 +0000 UTC m=+0.068217586 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:16:44 compute-0 podman[253642]: 2025-12-02 17:16:44.262265822 +0000 UTC m=+0.084481327 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:16:44 compute-0 podman[253641]: 2025-12-02 17:16:44.317678038 +0000 UTC m=+0.148338206 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  2 17:16:48 compute-0 nova_compute[189459]: 2025-12-02 17:16:48.445 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695793.4444299, c42974d1-ca42-4b24-bf99-14f43ee59916 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 17:16:48 compute-0 nova_compute[189459]: 2025-12-02 17:16:48.447 189463 INFO nova.compute.manager [-] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] VM Stopped (Lifecycle Event)
Dec  2 17:16:48 compute-0 nova_compute[189459]: 2025-12-02 17:16:48.467 189463 DEBUG nova.compute.manager [None req-02f26b12-0193-4801-84ec-b8eb77f42def - - - - - -] [instance: c42974d1-ca42-4b24-bf99-14f43ee59916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 17:16:48 compute-0 nova_compute[189459]: 2025-12-02 17:16:48.486 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:48 compute-0 nova_compute[189459]: 2025-12-02 17:16:48.526 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:53 compute-0 nova_compute[189459]: 2025-12-02 17:16:53.490 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:53 compute-0 nova_compute[189459]: 2025-12-02 17:16:53.529 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:57 compute-0 podman[253711]: 2025-12-02 17:16:57.314744925 +0000 UTC m=+0.137348026 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, architecture=x86_64)
Dec  2 17:16:58 compute-0 nova_compute[189459]: 2025-12-02 17:16:58.495 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:58 compute-0 nova_compute[189459]: 2025-12-02 17:16:58.532 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:16:59 compute-0 podman[203941]: time="2025-12-02T17:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:16:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:16:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4319 "" "Go-http-client/1.1"
Dec  2 17:17:01 compute-0 openstack_network_exporter[206093]: ERROR   17:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:17:01 compute-0 openstack_network_exporter[206093]: ERROR   17:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:17:01 compute-0 openstack_network_exporter[206093]: ERROR   17:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:17:01 compute-0 openstack_network_exporter[206093]: ERROR   17:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:17:01 compute-0 openstack_network_exporter[206093]: ERROR   17:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:17:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:01.888 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:17:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:01.888 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:17:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:01.889 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:17:02 compute-0 podman[253734]: 2025-12-02 17:17:02.258805731 +0000 UTC m=+0.081510878 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:17:02 compute-0 podman[253733]: 2025-12-02 17:17:02.267347117 +0000 UTC m=+0.093533406 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.055 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.057 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:17:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:17:03 compute-0 nova_compute[189459]: 2025-12-02 17:17:03.499 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:03 compute-0 nova_compute[189459]: 2025-12-02 17:17:03.535 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:04.651 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:17:04 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:04.652 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:17:04 compute-0 nova_compute[189459]: 2025-12-02 17:17:04.653 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:06 compute-0 podman[253775]: 2025-12-02 17:17:06.316509087 +0000 UTC m=+0.126300584 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 17:17:06 compute-0 podman[253773]: 2025-12-02 17:17:06.319934867 +0000 UTC m=+0.143994901 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:17:06 compute-0 podman[253774]: 2025-12-02 17:17:06.323237815 +0000 UTC m=+0.145911293 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64)
Dec  2 17:17:07 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:07.656 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:08 compute-0 nova_compute[189459]: 2025-12-02 17:17:08.502 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:08 compute-0 nova_compute[189459]: 2025-12-02 17:17:08.537 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:13 compute-0 nova_compute[189459]: 2025-12-02 17:17:13.506 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:13 compute-0 nova_compute[189459]: 2025-12-02 17:17:13.540 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:14 compute-0 podman[253825]: 2025-12-02 17:17:14.786343344 +0000 UTC m=+0.092024946 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:17:14 compute-0 podman[253826]: 2025-12-02 17:17:14.787109355 +0000 UTC m=+0.083261225 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:17:14 compute-0 podman[253824]: 2025-12-02 17:17:14.847630636 +0000 UTC m=+0.145286736 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.432 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.433 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.451 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.511 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.544 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.739 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.740 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.751 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.752 189463 INFO nova.compute.claims [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.868 189463 DEBUG nova.compute.provider_tree [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.881 189463 DEBUG nova.scheduler.client.report [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.900 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.901 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.942 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.943 189463 DEBUG nova.network.neutron [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:17:18 compute-0 nova_compute[189459]: 2025-12-02 17:17:18.981 189463 INFO nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.008 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.138 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.140 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.141 189463 INFO nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Creating image(s)#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.142 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "/var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.143 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "/var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.144 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "/var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.144 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:19 compute-0 nova_compute[189459]: 2025-12-02 17:17:19.145 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:20 compute-0 nova_compute[189459]: 2025-12-02 17:17:20.210 189463 DEBUG nova.policy [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd97265454999468fb261510e60c81b0e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:17:20 compute-0 ovn_controller[97975]: 2025-12-02T17:17:20Z|00148|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec  2 17:17:20 compute-0 nova_compute[189459]: 2025-12-02 17:17:20.895 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:20 compute-0 nova_compute[189459]: 2025-12-02 17:17:20.994 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.part --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:20 compute-0 nova_compute[189459]: 2025-12-02 17:17:20.996 189463 DEBUG nova.virt.images [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] 53890fe7-10ca-4d2d-8959-827e6ad0a9a2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  2 17:17:20 compute-0 nova_compute[189459]: 2025-12-02 17:17:20.997 189463 DEBUG nova.privsep.utils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  2 17:17:20 compute-0 nova_compute[189459]: 2025-12-02 17:17:20.998 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.part /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.299 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.part /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.converted" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.306 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.381 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.382 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.397 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.469 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.471 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.472 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.494 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.566 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.567 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba,backing_fmt=raw /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.606 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba,backing_fmt=raw /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.607 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.608 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.672 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.673 189463 DEBUG nova.virt.disk.api [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Checking if we can resize image /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.673 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.733 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.735 189463 DEBUG nova.virt.disk.api [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Cannot resize image /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.735 189463 DEBUG nova.objects.instance [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lazy-loading 'migration_context' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.749 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.749 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Ensure instance console log exists: /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.750 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.750 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.751 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:21 compute-0 nova_compute[189459]: 2025-12-02 17:17:21.769 189463 DEBUG nova.network.neutron [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Successfully created port: 68e04713-a4f3-481c-ba86-5b87fe8b2358 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.222 189463 DEBUG nova.network.neutron [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Successfully updated port: 68e04713-a4f3-481c-ba86-5b87fe8b2358 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.272 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.273 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.274 189463 DEBUG nova.network.neutron [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.416 189463 DEBUG nova.compute.manager [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-changed-68e04713-a4f3-481c-ba86-5b87fe8b2358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.417 189463 DEBUG nova.compute.manager [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Refreshing instance network info cache due to event network-changed-68e04713-a4f3-481c-ba86-5b87fe8b2358. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.418 189463 DEBUG oslo_concurrency.lockutils [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.514 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.546 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.560 189463 DEBUG nova.network.neutron [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.755 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.757 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.778 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.863 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.864 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.875 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:17:23 compute-0 nova_compute[189459]: 2025-12-02 17:17:23.876 189463 INFO nova.compute.claims [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.028 189463 DEBUG nova.compute.provider_tree [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.047 189463 DEBUG nova.scheduler.client.report [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.074 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.075 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.122 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.123 189463 DEBUG nova.network.neutron [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.149 189463 INFO nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.176 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.278 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.280 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.281 189463 INFO nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Creating image(s)#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.282 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "/var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.282 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "/var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.284 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "/var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.311 189463 DEBUG nova.network.neutron [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.316 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.349 189463 DEBUG nova.policy [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '262f5a0a8792434786ece0f667375e02', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c363afe1862442af83665e6092410d18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.357 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.358 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Instance network_info: |[{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.360 189463 DEBUG oslo_concurrency.lockutils [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.361 189463 DEBUG nova.network.neutron [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Refreshing network info cache for port 68e04713-a4f3-481c-ba86-5b87fe8b2358 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.367 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Start _get_guest_xml network_info=[{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:17:05Z,direct_url=<?>,disk_format='qcow2',id=53890fe7-10ca-4d2d-8959-827e6ad0a9a2,min_disk=0,min_ram=0,name='tempest-scenario-img--1502674318',owner='d97265454999468fb261510e60c81b0e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:17:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.380 189463 WARNING nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.393 189463 DEBUG nova.virt.libvirt.host [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.394 189463 DEBUG nova.virt.libvirt.host [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.401 189463 DEBUG nova.virt.libvirt.host [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.401 189463 DEBUG nova.virt.libvirt.host [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.402 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.402 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:17:05Z,direct_url=<?>,disk_format='qcow2',id=53890fe7-10ca-4d2d-8959-827e6ad0a9a2,min_disk=0,min_ram=0,name='tempest-scenario-img--1502674318',owner='d97265454999468fb261510e60c81b0e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:17:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.403 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.403 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.403 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.404 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.404 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.404 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.404 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.405 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.405 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.405 189463 DEBUG nova.virt.hardware [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.410 189463 DEBUG nova.virt.libvirt.vif [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3',id=13,image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bb3de81f-f629-45e4-a58b-8725288b0515'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d97265454999468fb261510e60c81b0e',ramdisk_id='',reservation_id='r-l33qblw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-603644689',owner_user_name='tempest-PrometheusGabbiTest-603644
689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:17:19Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5673ab6de24147cb96ea139c0ad6cb0e',uuid=3a077761-3f4d-47af-aea2-9c3255ed7868,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.410 189463 DEBUG nova.network.os_vif_util [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converting VIF {"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.411 189463 DEBUG nova.network.os_vif_util [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.412 189463 DEBUG nova.objects.instance [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.419 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.420 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.421 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.436 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.493 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <uuid>3a077761-3f4d-47af-aea2-9c3255ed7868</uuid>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <name>instance-0000000d</name>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:name>te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3</nova:name>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:17:24</nova:creationTime>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:user uuid="5673ab6de24147cb96ea139c0ad6cb0e">tempest-PrometheusGabbiTest-603644689-project-member</nova:user>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:project uuid="d97265454999468fb261510e60c81b0e">tempest-PrometheusGabbiTest-603644689</nova:project>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="53890fe7-10ca-4d2d-8959-827e6ad0a9a2"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        <nova:port uuid="68e04713-a4f3-481c-ba86-5b87fe8b2358">
Dec  2 17:17:24 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.3.185" ipVersion="4"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <system>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <entry name="serial">3a077761-3f4d-47af-aea2-9c3255ed7868</entry>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <entry name="uuid">3a077761-3f4d-47af-aea2-9c3255ed7868</entry>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </system>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <os>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </os>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <features>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </features>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.config"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:66:75:a2"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <target dev="tap68e04713-a4"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/console.log" append="off"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <video>
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </video>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:17:24 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:17:24 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:17:24 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:17:24 compute-0 nova_compute[189459]: </domain>
Dec  2 17:17:24 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.495 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Preparing to wait for external event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.496 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.496 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.497 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.499 189463 DEBUG nova.virt.libvirt.vif [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3',id=13,image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bb3de81f-f629-45e4-a58b-8725288b0515'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d97265454999468fb261510e60c81b0e',ramdisk_id='',reservation_id='r-l33qblw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-603644689',owner_user_name='tempest-PrometheusGabbiTest-603644689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:17:19Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5673ab6de24147cb96ea139c0ad6cb0e',uuid=3a077761-3f4d-47af-aea2-9c3255ed7868,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.499 189463 DEBUG nova.network.os_vif_util [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converting VIF {"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.501 189463 DEBUG nova.network.os_vif_util [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.502 189463 DEBUG os_vif [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.503 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.505 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.505 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.511 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.511 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68e04713-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.512 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68e04713-a4, col_values=(('external_ids', {'iface-id': '68e04713-a4f3-481c-ba86-5b87fe8b2358', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:75:a2', 'vm-uuid': '3a077761-3f4d-47af-aea2-9c3255ed7868'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:24 compute-0 NetworkManager[56503]: <info>  [1764695844.5168] manager: (tap68e04713-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.515 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.522 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.527 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.529 189463 INFO os_vif [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4')#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.532 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.533 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.588 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7,backing_fmt=raw /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.589 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "32bc5c5b2a17e06e78561597f1b90498e3f742b7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.590 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.663 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.664 189463 DEBUG nova.virt.disk.api [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Checking if we can resize image /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.664 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.759 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.760 189463 DEBUG nova.virt.disk.api [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Cannot resize image /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.761 189463 DEBUG nova.objects.instance [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lazy-loading 'migration_context' on Instance uuid d20422e7-48ac-4100-9ae0-4322baab5766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.799 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.800 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Ensure instance console log exists: /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.801 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.801 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.801 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.806 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.807 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.807 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] No VIF found with MAC fa:16:3e:66:75:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:17:24 compute-0 nova_compute[189459]: 2025-12-02 17:17:24.808 189463 INFO nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Using config drive#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.417 189463 INFO nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Creating config drive at /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.config#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.423 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9whv00q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.563 189463 DEBUG oslo_concurrency.processutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9whv00q" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:25 compute-0 kernel: tap68e04713-a4: entered promiscuous mode
Dec  2 17:17:25 compute-0 NetworkManager[56503]: <info>  [1764695845.6326] manager: (tap68e04713-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec  2 17:17:25 compute-0 ovn_controller[97975]: 2025-12-02T17:17:25Z|00149|binding|INFO|Claiming lport 68e04713-a4f3-481c-ba86-5b87fe8b2358 for this chassis.
Dec  2 17:17:25 compute-0 ovn_controller[97975]: 2025-12-02T17:17:25Z|00150|binding|INFO|68e04713-a4f3-481c-ba86-5b87fe8b2358: Claiming fa:16:3e:66:75:a2 10.100.3.185
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.634 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.645 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:75:a2 10.100.3.185'], port_security=['fa:16:3e:66:75:a2 10.100.3.185'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.185/16', 'neutron:device_id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd97265454999468fb261510e60c81b0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf5ac8bc-8bfc-4f8e-a133-81a949c4ce5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de4d374-0f93-45af-a6f2-2a5ac9c09a1c, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=68e04713-a4f3-481c-ba86-5b87fe8b2358) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.647 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 68e04713-a4f3-481c-ba86-5b87fe8b2358 in datapath 82b562d0-fe3d-43c8-b78e-fc2eee29ef70 bound to our chassis#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.649 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82b562d0-fe3d-43c8-b78e-fc2eee29ef70#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.664 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[fa908632-3d76-4249-89bd-426e3a3e25d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.665 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82b562d0-f1 in ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.673 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82b562d0-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.673 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[76e7bc1f-5242-4b58-b71e-88626f5d6d92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.674 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ede16ca0-e9f9-4365-bf08-f0fd6b145465]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 systemd-udevd[253956]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.688 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[966d5c06-d278-421b-9244-16318ea6a1c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 NetworkManager[56503]: <info>  [1764695845.7039] device (tap68e04713-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:17:25 compute-0 NetworkManager[56503]: <info>  [1764695845.7050] device (tap68e04713-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.703 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:25 compute-0 ovn_controller[97975]: 2025-12-02T17:17:25Z|00151|binding|INFO|Setting lport 68e04713-a4f3-481c-ba86-5b87fe8b2358 ovn-installed in OVS
Dec  2 17:17:25 compute-0 ovn_controller[97975]: 2025-12-02T17:17:25Z|00152|binding|INFO|Setting lport 68e04713-a4f3-481c-ba86-5b87fe8b2358 up in Southbound
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.712 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.714 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.716 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9dcfec-e4cf-4fa1-aab4-b5f0b9b6d119]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 systemd-machined[155878]: New machine qemu-14-instance-0000000d.
Dec  2 17:17:25 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.753 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bd2ba1-a811-4d29-9026-89b5ae5bf642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.760 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[47cb2580-0372-4127-94b0-737370199360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 NetworkManager[56503]: <info>  [1764695845.7615] manager: (tap82b562d0-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.798 189463 DEBUG nova.network.neutron [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Successfully created port: b08a0c1b-2269-4358-92a6-a6be384d5bf6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.802 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae8f7f1-1ad4-40ff-8c77-8b2a6dda38a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.806 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4e9090-4c82-496d-a6ac-cc92b8c42dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 NetworkManager[56503]: <info>  [1764695845.8341] device (tap82b562d0-f0): carrier: link connected
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.840 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[fd69ba40-8672-411b-9d05-44e41c2a2744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.858 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a6b75a-b2a3-4ca7-8ce6-ad292b54b860]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82b562d0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539436, 'reachable_time': 33399, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253990, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.881 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9a1806-473f-4bcb-b025-708278a84cf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539436, 'tstamp': 539436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253991, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.907 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[17c6a70d-b405-4583-9440-fcb7e69e49cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82b562d0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539436, 'reachable_time': 33399, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253992, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:25.951 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[a5342f57-1a40-4ffb-975f-8814e272c34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.993 189463 DEBUG nova.compute.manager [req-8ed9d93d-2ab1-4562-b953-917fc0bf4f6d req-5dc18bda-d3f0-4c7f-a86d-cc078bf8b64c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.994 189463 DEBUG oslo_concurrency.lockutils [req-8ed9d93d-2ab1-4562-b953-917fc0bf4f6d req-5dc18bda-d3f0-4c7f-a86d-cc078bf8b64c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.994 189463 DEBUG oslo_concurrency.lockutils [req-8ed9d93d-2ab1-4562-b953-917fc0bf4f6d req-5dc18bda-d3f0-4c7f-a86d-cc078bf8b64c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.994 189463 DEBUG oslo_concurrency.lockutils [req-8ed9d93d-2ab1-4562-b953-917fc0bf4f6d req-5dc18bda-d3f0-4c7f-a86d-cc078bf8b64c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:25 compute-0 nova_compute[189459]: 2025-12-02 17:17:25.995 189463 DEBUG nova.compute.manager [req-8ed9d93d-2ab1-4562-b953-917fc0bf4f6d req-5dc18bda-d3f0-4c7f-a86d-cc078bf8b64c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Processing event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.039 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f98499d9-102c-48a4-a57a-dbb3cd6cd080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.040 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82b562d0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.041 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.041 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82b562d0-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.043 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:26 compute-0 NetworkManager[56503]: <info>  [1764695846.0449] manager: (tap82b562d0-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec  2 17:17:26 compute-0 kernel: tap82b562d0-f0: entered promiscuous mode
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.046 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.050 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82b562d0-f0, col_values=(('external_ids', {'iface-id': '3390bd6d-860e-4bcb-929b-c08f611343b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.052 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:26 compute-0 ovn_controller[97975]: 2025-12-02T17:17:26Z|00153|binding|INFO|Releasing lport 3390bd6d-860e-4bcb-929b-c08f611343b9 from this chassis (sb_readonly=0)
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.054 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.054 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82b562d0-fe3d-43c8-b78e-fc2eee29ef70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82b562d0-fe3d-43c8-b78e-fc2eee29ef70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.055 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c33ad0c8-2708-4a9a-8a48-2631d793368e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.056 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-82b562d0-fe3d-43c8-b78e-fc2eee29ef70
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/82b562d0-fe3d-43c8-b78e-fc2eee29ef70.pid.haproxy
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 82b562d0-fe3d-43c8-b78e-fc2eee29ef70
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:17:26 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:26.057 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'env', 'PROCESS_TAG=haproxy-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82b562d0-fe3d-43c8-b78e-fc2eee29ef70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.069 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.092 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695846.0918717, 3a077761-3f4d-47af-aea2-9c3255ed7868 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.093 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] VM Started (Lifecycle Event)#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.097 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.112 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.115 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.121 189463 INFO nova.virt.libvirt.driver [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Instance spawned successfully.#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.122 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.127 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.142 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.143 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.143 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.144 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.144 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.144 189463 DEBUG nova.virt.libvirt.driver [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.148 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.148 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695846.0920148, 3a077761-3f4d-47af-aea2-9c3255ed7868 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.148 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.189 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.195 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695846.1017523, 3a077761-3f4d-47af-aea2-9c3255ed7868 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.195 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.223 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.229 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.239 189463 INFO nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Took 7.10 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.239 189463 DEBUG nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.250 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.302 189463 INFO nova.compute.manager [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Took 7.79 seconds to build instance.#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.323 189463 DEBUG oslo_concurrency.lockutils [None req-482927f1-9c73-4232-bedb-93cac8dd3386 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.456 189463 DEBUG nova.network.neutron [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated VIF entry in instance network info cache for port 68e04713-a4f3-481c-ba86-5b87fe8b2358. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.456 189463 DEBUG nova.network.neutron [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.470 189463 DEBUG oslo_concurrency.lockutils [req-4a6f63b0-1188-4c60-bfab-2eabaa0b31c1 req-c6e8f725-9f46-45f6-95c1-7308ffcc5b25 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.500 189463 DEBUG nova.network.neutron [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Successfully updated port: b08a0c1b-2269-4358-92a6-a6be384d5bf6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.515 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.515 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquired lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:17:26 compute-0 nova_compute[189459]: 2025-12-02 17:17:26.515 189463 DEBUG nova.network.neutron [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:17:26 compute-0 podman[254030]: 2025-12-02 17:17:26.548915006 +0000 UTC m=+0.090656910 container create ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  2 17:17:26 compute-0 systemd[1]: Started libpod-conmon-ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42.scope.
Dec  2 17:17:26 compute-0 podman[254030]: 2025-12-02 17:17:26.499647302 +0000 UTC m=+0.041389226 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:17:26 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc83dbfd8d58178d2795a0d0fe3a25999803f1770a67338433f2f68c2e555b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:17:26 compute-0 podman[254030]: 2025-12-02 17:17:26.672815344 +0000 UTC m=+0.214557258 container init ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 17:17:26 compute-0 podman[254030]: 2025-12-02 17:17:26.679930062 +0000 UTC m=+0.221671946 container start ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 17:17:26 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [NOTICE]   (254048) : New worker (254050) forked
Dec  2 17:17:26 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [NOTICE]   (254048) : Loading success.
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.168 189463 DEBUG nova.network.neutron [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.285 189463 DEBUG nova.compute.manager [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-changed-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.285 189463 DEBUG nova.compute.manager [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Refreshing instance network info cache due to event network-changed-b08a0c1b-2269-4358-92a6-a6be384d5bf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.285 189463 DEBUG oslo_concurrency.lockutils [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.823 189463 DEBUG nova.network.neutron [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updating instance_info_cache with network_info: [{"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.844 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Releasing lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.845 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Instance network_info: |[{"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.845 189463 DEBUG oslo_concurrency.lockutils [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.845 189463 DEBUG nova.network.neutron [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Refreshing network info cache for port b08a0c1b-2269-4358-92a6-a6be384d5bf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.849 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Start _get_guest_xml network_info=[{"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'b90f8403-6db1-4b01-bb62-c5b878a5c904'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.857 189463 WARNING nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.866 189463 DEBUG nova.virt.libvirt.host [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.867 189463 DEBUG nova.virt.libvirt.host [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.871 189463 DEBUG nova.virt.libvirt.host [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.872 189463 DEBUG nova.virt.libvirt.host [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.872 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.872 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:12:07Z,direct_url=<?>,disk_format='qcow2',id=b90f8403-6db1-4b01-bb62-c5b878a5c904,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='2f96d47197fa40f2a7126bf626847d74',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:12:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.873 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.873 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.873 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.874 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.874 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.874 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.874 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.875 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.875 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.875 189463 DEBUG nova.virt.hardware [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.879 189463 DEBUG nova.virt.libvirt.vif [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1299461533',display_name='tempest-TestServerBasicOps-server-1299461533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1299461533',id=14,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCs23Mb+Vk3wKBV8WjWEX3fZPocdwp7onjmdI+QXwb8YmPIjZmtTXjaOcMqr98JZL/Jqu/Le+rXG3IZ1AHlM7JFWl4QV48OuFEnMKIIJKeASaTqWqZeiyD9aIWCL1bDM9Q==',key_name='tempest-TestServerBasicOps-1986499285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c363afe1862442af83665e6092410d18',ramdisk_id='',reservation_id='r-6fmg9z8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2018574224',owner_user_name='tempest-TestServerBasicOps-2018574224-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:17:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='262f5a0a8792434786ece0f667375e02',uuid=d20422e7-48ac-4100-9ae0-4322baab5766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.879 189463 DEBUG nova.network.os_vif_util [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Converting VIF {"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.880 189463 DEBUG nova.network.os_vif_util [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.881 189463 DEBUG nova.objects.instance [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid d20422e7-48ac-4100-9ae0-4322baab5766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.895 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <uuid>d20422e7-48ac-4100-9ae0-4322baab5766</uuid>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <name>instance-0000000e</name>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:name>tempest-TestServerBasicOps-server-1299461533</nova:name>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:17:27</nova:creationTime>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:user uuid="262f5a0a8792434786ece0f667375e02">tempest-TestServerBasicOps-2018574224-project-member</nova:user>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:project uuid="c363afe1862442af83665e6092410d18">tempest-TestServerBasicOps-2018574224</nova:project>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="b90f8403-6db1-4b01-bb62-c5b878a5c904"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        <nova:port uuid="b08a0c1b-2269-4358-92a6-a6be384d5bf6">
Dec  2 17:17:27 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <system>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <entry name="serial">d20422e7-48ac-4100-9ae0-4322baab5766</entry>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <entry name="uuid">d20422e7-48ac-4100-9ae0-4322baab5766</entry>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </system>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <os>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </os>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <features>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </features>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.config"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:0e:3b:98"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <target dev="tapb08a0c1b-22"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/console.log" append="off"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <video>
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </video>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:17:27 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:17:27 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:17:27 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:17:27 compute-0 nova_compute[189459]: </domain>
Dec  2 17:17:27 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.897 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Preparing to wait for external event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.897 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.898 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.898 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.899 189463 DEBUG nova.virt.libvirt.vif [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1299461533',display_name='tempest-TestServerBasicOps-server-1299461533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1299461533',id=14,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCs23Mb+Vk3wKBV8WjWEX3fZPocdwp7onjmdI+QXwb8YmPIjZmtTXjaOcMqr98JZL/Jqu/Le+rXG3IZ1AHlM7JFWl4QV48OuFEnMKIIJKeASaTqWqZeiyD9aIWCL1bDM9Q==',key_name='tempest-TestServerBasicOps-1986499285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c363afe1862442af83665e6092410d18',ramdisk_id='',reservation_id='r-6fmg9z8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2018574224',owner_user_name='tempest-TestServerBasicOps-2018574224-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:17:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='262f5a0a8792434786ece0f667375e02',uuid=d20422e7-48ac-4100-9ae0-4322baab5766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.900 189463 DEBUG nova.network.os_vif_util [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Converting VIF {"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.901 189463 DEBUG nova.network.os_vif_util [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.902 189463 DEBUG os_vif [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.902 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.903 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.904 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.908 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.908 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb08a0c1b-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.909 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb08a0c1b-22, col_values=(('external_ids', {'iface-id': 'b08a0c1b-2269-4358-92a6-a6be384d5bf6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:3b:98', 'vm-uuid': 'd20422e7-48ac-4100-9ae0-4322baab5766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:27 compute-0 NetworkManager[56503]: <info>  [1764695847.9118] manager: (tapb08a0c1b-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.913 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.920 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:27 compute-0 nova_compute[189459]: 2025-12-02 17:17:27.922 189463 INFO os_vif [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22')#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.019 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.020 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.020 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] No VIF found with MAC fa:16:3e:0e:3b:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.020 189463 INFO nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Using config drive#033[00m
Dec  2 17:17:28 compute-0 podman[254062]: 2025-12-02 17:17:28.063046109 +0000 UTC m=+0.094098161 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.094 189463 DEBUG nova.compute.manager [req-8606f014-e6f2-4001-9a10-a874eee8a62d req-06a1ce43-8649-4ed8-af1c-368bbfd39080 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.094 189463 DEBUG oslo_concurrency.lockutils [req-8606f014-e6f2-4001-9a10-a874eee8a62d req-06a1ce43-8649-4ed8-af1c-368bbfd39080 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.094 189463 DEBUG oslo_concurrency.lockutils [req-8606f014-e6f2-4001-9a10-a874eee8a62d req-06a1ce43-8649-4ed8-af1c-368bbfd39080 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.094 189463 DEBUG oslo_concurrency.lockutils [req-8606f014-e6f2-4001-9a10-a874eee8a62d req-06a1ce43-8649-4ed8-af1c-368bbfd39080 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.094 189463 DEBUG nova.compute.manager [req-8606f014-e6f2-4001-9a10-a874eee8a62d req-06a1ce43-8649-4ed8-af1c-368bbfd39080 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] No waiting events found dispatching network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.095 189463 WARNING nova.compute.manager [req-8606f014-e6f2-4001-9a10-a874eee8a62d req-06a1ce43-8649-4ed8-af1c-368bbfd39080 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received unexpected event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.415 189463 INFO nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Creating config drive at /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.config#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.420 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp94b76vgu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.547 189463 DEBUG oslo_concurrency.processutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp94b76vgu" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.553 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:28 compute-0 kernel: tapb08a0c1b-22: entered promiscuous mode
Dec  2 17:17:28 compute-0 systemd-udevd[253974]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:17:28 compute-0 ovn_controller[97975]: 2025-12-02T17:17:28Z|00154|binding|INFO|Claiming lport b08a0c1b-2269-4358-92a6-a6be384d5bf6 for this chassis.
Dec  2 17:17:28 compute-0 NetworkManager[56503]: <info>  [1764695848.6210] manager: (tapb08a0c1b-22): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec  2 17:17:28 compute-0 ovn_controller[97975]: 2025-12-02T17:17:28Z|00155|binding|INFO|b08a0c1b-2269-4358-92a6-a6be384d5bf6: Claiming fa:16:3e:0e:3b:98 10.100.0.4
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.623 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:28 compute-0 NetworkManager[56503]: <info>  [1764695848.6372] device (tapb08a0c1b-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.639 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:3b:98 10.100.0.4'], port_security=['fa:16:3e:0e:3b:98 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd20422e7-48ac-4100-9ae0-4322baab5766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c363afe1862442af83665e6092410d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c05ec07f-0fc3-4ae0-ae11-843209d86dab f17c8727-3b06-44be-8319-1c1a9cb3ffb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=731e4b34-a15c-43f9-a44d-c854f843df51, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=b08a0c1b-2269-4358-92a6-a6be384d5bf6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:17:28 compute-0 NetworkManager[56503]: <info>  [1764695848.6429] device (tapb08a0c1b-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.643 106835 INFO neutron.agent.ovn.metadata.agent [-] Port b08a0c1b-2269-4358-92a6-a6be384d5bf6 in datapath 13fd0660-de1e-41a8-9f35-c1dedb6628de bound to our chassis#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.645 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13fd0660-de1e-41a8-9f35-c1dedb6628de#033[00m
Dec  2 17:17:28 compute-0 systemd-machined[155878]: New machine qemu-15-instance-0000000e.
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.663 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[067e0cbd-24da-402e-9041-14777dba7e44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.664 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13fd0660-d1 in ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.666 240010 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13fd0660-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.666 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[692449d1-f281-4c96-b555-4113c8cecb7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.668 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5afe5a0f-2082-4114-ac94-7137e8202989]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.681 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea789c9-99db-4741-a9ab-316da0570668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  2 17:17:28 compute-0 ovn_controller[97975]: 2025-12-02T17:17:28Z|00156|binding|INFO|Setting lport b08a0c1b-2269-4358-92a6-a6be384d5bf6 ovn-installed in OVS
Dec  2 17:17:28 compute-0 ovn_controller[97975]: 2025-12-02T17:17:28Z|00157|binding|INFO|Setting lport b08a0c1b-2269-4358-92a6-a6be384d5bf6 up in Southbound
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.686 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.702 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[9348422d-9b2c-4dff-950a-3be004498954]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.730 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[6d36c395-b9e0-4bbd-8c14-20ae5c17a0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 systemd-udevd[254103]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.737 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[ba351994-1405-485a-ba37-5e3268ea2cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 NetworkManager[56503]: <info>  [1764695848.7393] manager: (tap13fd0660-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.772 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3def1c-110d-44ae-9dd8-961d53bf55ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.775 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1d006a-16cf-4c1c-9db5-567fc8fa7b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 NetworkManager[56503]: <info>  [1764695848.8010] device (tap13fd0660-d0): carrier: link connected
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.807 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[22867191-7336-466f-9bbc-cfd6e2499db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.826 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[87abcd00-ccfb-4e46-b1a9-23ff1543d166]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13fd0660-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f9:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539733, 'reachable_time': 33134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254131, 'error': None, 'target': 'ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.842 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[15d491bd-e91d-4394-96f0-d4b1ef1107d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f9bf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539733, 'tstamp': 539733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254132, 'error': None, 'target': 'ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.862 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[2607dd2a-e001-4eb5-84eb-639992193f5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13fd0660-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f9:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539733, 'reachable_time': 33134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254133, 'error': None, 'target': 'ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.896 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[04188db4-6476-4050-96d7-22d08a1ed34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.967 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[4761c3f6-9943-4dda-be83-2ca2d15023c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.969 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13fd0660-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.970 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.971 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13fd0660-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.974 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:28 compute-0 kernel: tap13fd0660-d0: entered promiscuous mode
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.977 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:28 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.982 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13fd0660-d0, col_values=(('external_ids', {'iface-id': '3eb26e54-3e90-4fbf-a88b-9cb98291cd67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.984 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:28 compute-0 NetworkManager[56503]: <info>  [1764695848.9850] manager: (tap13fd0660-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  2 17:17:28 compute-0 ovn_controller[97975]: 2025-12-02T17:17:28Z|00158|binding|INFO|Releasing lport 3eb26e54-3e90-4fbf-a88b-9cb98291cd67 from this chassis (sb_readonly=0)
Dec  2 17:17:28 compute-0 nova_compute[189459]: 2025-12-02 17:17:28.986 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.986 106835 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13fd0660-de1e-41a8-9f35-c1dedb6628de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13fd0660-de1e-41a8-9f35-c1dedb6628de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.988 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[08028ed8-9875-4e3d-b18b-af81f2f694bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.989 106835 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: global
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    log         /dev/log local0 debug
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    log-tag     haproxy-metadata-proxy-13fd0660-de1e-41a8-9f35-c1dedb6628de
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    user        root
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    group       root
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    maxconn     1024
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    pidfile     /var/lib/neutron/external/pids/13fd0660-de1e-41a8-9f35-c1dedb6628de.pid.haproxy
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    daemon
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: defaults
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    log global
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    mode http
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    option httplog
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    option dontlognull
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    option http-server-close
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    option forwardfor
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    retries                 3
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    timeout http-request    30s
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    timeout connect         30s
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    timeout client          32s
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    timeout server          32s
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    timeout http-keep-alive 30s
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: listen listener
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    bind 169.254.169.254:80
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]:    http-request add-header X-OVN-Network-ID 13fd0660-de1e-41a8-9f35-c1dedb6628de
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 17:17:29 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:17:28.989 106835 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'env', 'PROCESS_TAG=haproxy-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13fd0660-de1e-41a8-9f35-c1dedb6628de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.007 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.381 189463 DEBUG nova.compute.manager [req-9f557cbb-776b-4654-ba71-2f467bb12115 req-c042f3cd-8fff-4a72-a3c6-1fc1df430a19 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.382 189463 DEBUG oslo_concurrency.lockutils [req-9f557cbb-776b-4654-ba71-2f467bb12115 req-c042f3cd-8fff-4a72-a3c6-1fc1df430a19 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.383 189463 DEBUG oslo_concurrency.lockutils [req-9f557cbb-776b-4654-ba71-2f467bb12115 req-c042f3cd-8fff-4a72-a3c6-1fc1df430a19 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.383 189463 DEBUG oslo_concurrency.lockutils [req-9f557cbb-776b-4654-ba71-2f467bb12115 req-c042f3cd-8fff-4a72-a3c6-1fc1df430a19 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.384 189463 DEBUG nova.compute.manager [req-9f557cbb-776b-4654-ba71-2f467bb12115 req-c042f3cd-8fff-4a72-a3c6-1fc1df430a19 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Processing event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:17:29 compute-0 podman[254165]: 2025-12-02 17:17:29.431346054 +0000 UTC m=+0.093564006 container create 8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  2 17:17:29 compute-0 podman[254165]: 2025-12-02 17:17:29.385704187 +0000 UTC m=+0.047922179 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 17:17:29 compute-0 systemd[1]: Started libpod-conmon-8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9.scope.
Dec  2 17:17:29 compute-0 systemd[1]: Started libcrun container.
Dec  2 17:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7d05bbd7c8c649e4c095a489730dc7242d6ad3bc2a69a7fbaa808828ef8ce1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 17:17:29 compute-0 podman[254165]: 2025-12-02 17:17:29.582965216 +0000 UTC m=+0.245183238 container init 8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 17:17:29 compute-0 podman[254165]: 2025-12-02 17:17:29.592097428 +0000 UTC m=+0.254315400 container start 8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.637 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.639 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695849.6384506, d20422e7-48ac-4100-9ae0-4322baab5766 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.639 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] VM Started (Lifecycle Event)#033[00m
Dec  2 17:17:29 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [NOTICE]   (254191) : New worker (254193) forked
Dec  2 17:17:29 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [NOTICE]   (254191) : Loading success.
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.649 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.660 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.664 189463 INFO nova.virt.libvirt.driver [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Instance spawned successfully.#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.664 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.670 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.694 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.695 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695849.6385605, d20422e7-48ac-4100-9ae0-4322baab5766 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.696 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.700 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.701 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.702 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.702 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.703 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.704 189463 DEBUG nova.virt.libvirt.driver [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.711 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.717 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764695849.6484592, d20422e7-48ac-4100-9ae0-4322baab5766 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.718 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.735 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.741 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:17:29 compute-0 podman[203941]: time="2025-12-02T17:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:17:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec  2 17:17:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5229 "" "Go-http-client/1.1"
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.767 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.790 189463 INFO nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Took 5.51 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.791 189463 DEBUG nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.846 189463 INFO nova.compute.manager [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Took 6.01 seconds to build instance.#033[00m
Dec  2 17:17:29 compute-0 nova_compute[189459]: 2025-12-02 17:17:29.863 189463 DEBUG oslo_concurrency.lockutils [None req-b90b5b4e-6b5d-4347-8072-114dda0016ba 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:30 compute-0 nova_compute[189459]: 2025-12-02 17:17:30.212 189463 DEBUG nova.network.neutron [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updated VIF entry in instance network info cache for port b08a0c1b-2269-4358-92a6-a6be384d5bf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:17:30 compute-0 nova_compute[189459]: 2025-12-02 17:17:30.213 189463 DEBUG nova.network.neutron [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updating instance_info_cache with network_info: [{"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:17:30 compute-0 nova_compute[189459]: 2025-12-02 17:17:30.240 189463 DEBUG oslo_concurrency.lockutils [req-48f99202-45aa-4f01-a1de-bc12c0bab3b4 req-7948022a-2a0a-4b91-a803-7102917be68b b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.413 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: ERROR   17:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: ERROR   17:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: ERROR   17:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: ERROR   17:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: ERROR   17:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:17:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.733 189463 DEBUG nova.compute.manager [req-f2477e5f-9e03-4120-b6cd-4114fc07cfac req-b6e150cd-2bcc-45c3-ac4e-dc33dce061e8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.735 189463 DEBUG oslo_concurrency.lockutils [req-f2477e5f-9e03-4120-b6cd-4114fc07cfac req-b6e150cd-2bcc-45c3-ac4e-dc33dce061e8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.736 189463 DEBUG oslo_concurrency.lockutils [req-f2477e5f-9e03-4120-b6cd-4114fc07cfac req-b6e150cd-2bcc-45c3-ac4e-dc33dce061e8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.736 189463 DEBUG oslo_concurrency.lockutils [req-f2477e5f-9e03-4120-b6cd-4114fc07cfac req-b6e150cd-2bcc-45c3-ac4e-dc33dce061e8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.737 189463 DEBUG nova.compute.manager [req-f2477e5f-9e03-4120-b6cd-4114fc07cfac req-b6e150cd-2bcc-45c3-ac4e-dc33dce061e8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] No waiting events found dispatching network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.738 189463 WARNING nova.compute.manager [req-f2477e5f-9e03-4120-b6cd-4114fc07cfac req-b6e150cd-2bcc-45c3-ac4e-dc33dce061e8 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received unexpected event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:17:31 compute-0 nova_compute[189459]: 2025-12-02 17:17:31.922 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:31 compute-0 NetworkManager[56503]: <info>  [1764695851.9231] manager: (patch-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec  2 17:17:31 compute-0 NetworkManager[56503]: <info>  [1764695851.9259] manager: (patch-br-int-to-provnet-a6ace200-ff03-4989-9ca5-1fe93cf690ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec  2 17:17:32 compute-0 nova_compute[189459]: 2025-12-02 17:17:32.112 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:32 compute-0 ovn_controller[97975]: 2025-12-02T17:17:32Z|00159|binding|INFO|Releasing lport 3eb26e54-3e90-4fbf-a88b-9cb98291cd67 from this chassis (sb_readonly=0)
Dec  2 17:17:32 compute-0 ovn_controller[97975]: 2025-12-02T17:17:32Z|00160|binding|INFO|Releasing lport 3390bd6d-860e-4bcb-929b-c08f611343b9 from this chassis (sb_readonly=0)
Dec  2 17:17:32 compute-0 nova_compute[189459]: 2025-12-02 17:17:32.151 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:32 compute-0 nova_compute[189459]: 2025-12-02 17:17:32.912 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:33 compute-0 podman[254204]: 2025-12-02 17:17:33.28121714 +0000 UTC m=+0.101316372 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:17:33 compute-0 podman[254205]: 2025-12-02 17:17:33.297554542 +0000 UTC m=+0.116275647 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.414 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.415 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.542 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.547 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.548 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.550 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.553 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.821 189463 DEBUG nova.compute.manager [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-changed-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.825 189463 DEBUG nova.compute.manager [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Refreshing instance network info cache due to event network-changed-b08a0c1b-2269-4358-92a6-a6be384d5bf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.826 189463 DEBUG oslo_concurrency.lockutils [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.827 189463 DEBUG oslo_concurrency.lockutils [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:17:33 compute-0 nova_compute[189459]: 2025-12-02 17:17:33.828 189463 DEBUG nova.network.neutron [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Refreshing network info cache for port b08a0c1b-2269-4358-92a6-a6be384d5bf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:17:34 compute-0 nova_compute[189459]: 2025-12-02 17:17:34.729 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:17:34 compute-0 nova_compute[189459]: 2025-12-02 17:17:34.760 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:17:34 compute-0 nova_compute[189459]: 2025-12-02 17:17:34.761 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:17:35 compute-0 nova_compute[189459]: 2025-12-02 17:17:35.192 189463 DEBUG nova.network.neutron [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updated VIF entry in instance network info cache for port b08a0c1b-2269-4358-92a6-a6be384d5bf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 17:17:35 compute-0 nova_compute[189459]: 2025-12-02 17:17:35.193 189463 DEBUG nova.network.neutron [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updating instance_info_cache with network_info: [{"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:17:35 compute-0 nova_compute[189459]: 2025-12-02 17:17:35.218 189463 DEBUG oslo_concurrency.lockutils [req-9eb4a8e5-6f2a-4270-8771-a9e4b55d5e49 req-90cf6930-45e8-4c9f-a0c1-ce384bf49a5c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:17:36 compute-0 nova_compute[189459]: 2025-12-02 17:17:36.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:36 compute-0 podman[254242]: 2025-12-02 17:17:36.564126464 +0000 UTC m=+0.107797813 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 17:17:36 compute-0 podman[254241]: 2025-12-02 17:17:36.572046674 +0000 UTC m=+0.110331071 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 17:17:36 compute-0 podman[254243]: 2025-12-02 17:17:36.575126575 +0000 UTC m=+0.103329105 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.442 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.443 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.444 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.540 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.640 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.641 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.736 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.754 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.851 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.853 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.913 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:17:37 compute-0 nova_compute[189459]: 2025-12-02 17:17:37.917 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.327 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.329 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=72.12464141845703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.329 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.330 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.410 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.411 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance d20422e7-48ac-4100-9ae0-4322baab5766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.413 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.414 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.502 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.519 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.548 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.549 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:17:38 compute-0 nova_compute[189459]: 2025-12-02 17:17:38.555 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:41 compute-0 nova_compute[189459]: 2025-12-02 17:17:41.550 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:42 compute-0 nova_compute[189459]: 2025-12-02 17:17:42.922 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:43 compute-0 nova_compute[189459]: 2025-12-02 17:17:43.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:43 compute-0 nova_compute[189459]: 2025-12-02 17:17:43.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:17:43 compute-0 nova_compute[189459]: 2025-12-02 17:17:43.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:17:43 compute-0 nova_compute[189459]: 2025-12-02 17:17:43.561 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:45 compute-0 podman[254311]: 2025-12-02 17:17:45.286887353 +0000 UTC m=+0.096946437 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:17:45 compute-0 podman[254310]: 2025-12-02 17:17:45.320199204 +0000 UTC m=+0.135941658 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:17:45 compute-0 podman[254309]: 2025-12-02 17:17:45.326064239 +0000 UTC m=+0.143004115 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:17:47 compute-0 nova_compute[189459]: 2025-12-02 17:17:47.925 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:48 compute-0 nova_compute[189459]: 2025-12-02 17:17:48.566 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:52 compute-0 nova_compute[189459]: 2025-12-02 17:17:52.929 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:53 compute-0 nova_compute[189459]: 2025-12-02 17:17:53.569 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:57 compute-0 nova_compute[189459]: 2025-12-02 17:17:57.934 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:58 compute-0 podman[254380]: 2025-12-02 17:17:58.310156723 +0000 UTC m=+0.118933798 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Dec  2 17:17:58 compute-0 nova_compute[189459]: 2025-12-02 17:17:58.571 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:17:59 compute-0 podman[203941]: time="2025-12-02T17:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:17:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec  2 17:17:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5242 "" "Go-http-client/1.1"
Dec  2 17:18:01 compute-0 ovn_controller[97975]: 2025-12-02T17:18:01Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:75:a2 10.100.3.185
Dec  2 17:18:01 compute-0 ovn_controller[97975]: 2025-12-02T17:18:01Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:75:a2 10.100.3.185
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: ERROR   17:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: ERROR   17:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: ERROR   17:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: ERROR   17:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: ERROR   17:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:18:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:18:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:01.888 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:01.890 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:01.891 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:01 compute-0 ovn_controller[97975]: 2025-12-02T17:18:01Z|00161|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  2 17:18:02 compute-0 nova_compute[189459]: 2025-12-02 17:18:02.938 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:03 compute-0 nova_compute[189459]: 2025-12-02 17:18:03.574 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:04 compute-0 podman[254434]: 2025-12-02 17:18:04.244006659 +0000 UTC m=+0.078480457 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 17:18:04 compute-0 podman[254435]: 2025-12-02 17:18:04.272485343 +0000 UTC m=+0.097588024 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec  2 17:18:04 compute-0 ovn_controller[97975]: 2025-12-02T17:18:04Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:3b:98 10.100.0.4
Dec  2 17:18:04 compute-0 ovn_controller[97975]: 2025-12-02T17:18:04Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:3b:98 10.100.0.4
Dec  2 17:18:07 compute-0 podman[254472]: 2025-12-02 17:18:07.247014517 +0000 UTC m=+0.071374269 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:18:07 compute-0 podman[254473]: 2025-12-02 17:18:07.26526322 +0000 UTC m=+0.084718153 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  2 17:18:07 compute-0 podman[254471]: 2025-12-02 17:18:07.265943258 +0000 UTC m=+0.090105505 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  2 17:18:07 compute-0 nova_compute[189459]: 2025-12-02 17:18:07.943 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:08 compute-0 nova_compute[189459]: 2025-12-02 17:18:08.578 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:12 compute-0 nova_compute[189459]: 2025-12-02 17:18:12.947 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:13 compute-0 nova_compute[189459]: 2025-12-02 17:18:13.579 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:16 compute-0 podman[254529]: 2025-12-02 17:18:16.253854994 +0000 UTC m=+0.071551894 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:18:16 compute-0 podman[254535]: 2025-12-02 17:18:16.258697062 +0000 UTC m=+0.068584195 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:18:16 compute-0 podman[254528]: 2025-12-02 17:18:16.322821729 +0000 UTC m=+0.150732069 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 17:18:17 compute-0 nova_compute[189459]: 2025-12-02 17:18:17.951 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:18 compute-0 nova_compute[189459]: 2025-12-02 17:18:18.584 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:22 compute-0 nova_compute[189459]: 2025-12-02 17:18:22.956 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:23 compute-0 nova_compute[189459]: 2025-12-02 17:18:23.587 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:27 compute-0 nova_compute[189459]: 2025-12-02 17:18:27.962 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:28 compute-0 nova_compute[189459]: 2025-12-02 17:18:28.587 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:29 compute-0 podman[254599]: 2025-12-02 17:18:29.276436045 +0000 UTC m=+0.093666580 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  2 17:18:29 compute-0 podman[203941]: time="2025-12-02T17:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:18:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec  2 17:18:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5249 "" "Go-http-client/1.1"
Dec  2 17:18:31 compute-0 nova_compute[189459]: 2025-12-02 17:18:31.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: ERROR   17:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: ERROR   17:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: ERROR   17:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: ERROR   17:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: ERROR   17:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:18:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:18:32 compute-0 nova_compute[189459]: 2025-12-02 17:18:32.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:32 compute-0 nova_compute[189459]: 2025-12-02 17:18:32.967 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:33 compute-0 nova_compute[189459]: 2025-12-02 17:18:33.590 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:35 compute-0 podman[254618]: 2025-12-02 17:18:35.293614176 +0000 UTC m=+0.102384290 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:18:35 compute-0 podman[254619]: 2025-12-02 17:18:35.305052499 +0000 UTC m=+0.114882551 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 17:18:35 compute-0 nova_compute[189459]: 2025-12-02 17:18:35.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:35 compute-0 nova_compute[189459]: 2025-12-02 17:18:35.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:18:35 compute-0 nova_compute[189459]: 2025-12-02 17:18:35.623 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:18:35 compute-0 nova_compute[189459]: 2025-12-02 17:18:35.624 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:18:35 compute-0 nova_compute[189459]: 2025-12-02 17:18:35.625 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:36.790 106942 DEBUG eventlet.wsgi.server [-] (106942) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:36.791 106942 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: Accept: */*#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: Connection: close#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: Content-Type: text/plain#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: Host: 169.254.169.254#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: User-Agent: curl/7.84.0#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: X-Forwarded-For: 10.100.0.4#015
Dec  2 17:18:36 compute-0 ovn_metadata_agent[106830]: X-Ovn-Network-Id: 13fd0660-de1e-41a8-9f35-c1dedb6628de __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  2 17:18:37 compute-0 nova_compute[189459]: 2025-12-02 17:18:37.969 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:38 compute-0 podman[254656]: 2025-12-02 17:18:38.256758879 +0000 UTC m=+0.079316459 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 17:18:38 compute-0 podman[254654]: 2025-12-02 17:18:38.256949814 +0000 UTC m=+0.086327415 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 17:18:38 compute-0 podman[254655]: 2025-12-02 17:18:38.269906057 +0000 UTC m=+0.096287278 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, container_name=kepler)
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:38.565 106942 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:38.566 106942 INFO eventlet.wsgi.server [-] 10.100.0.4,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.7747729#033[00m
Dec  2 17:18:38 compute-0 haproxy-metadata-proxy-13fd0660-de1e-41a8-9f35-c1dedb6628de[254193]: 10.100.0.4:37850 [02/Dec/2025:17:18:36.788] listener listener/metadata 0/0/0/1777/1777 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  2 17:18:38 compute-0 nova_compute[189459]: 2025-12-02 17:18:38.595 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:38 compute-0 nova_compute[189459]: 2025-12-02 17:18:38.680 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updating instance_info_cache with network_info: [{"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:38.682 106942 DEBUG eventlet.wsgi.server [-] (106942) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:38.683 106942 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: Accept: */*#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: Connection: close#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: Content-Length: 100#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: Content-Type: application/x-www-form-urlencoded#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: Host: 169.254.169.254#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: User-Agent: curl/7.84.0#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: X-Forwarded-For: 10.100.0.4#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: X-Ovn-Network-Id: 13fd0660-de1e-41a8-9f35-c1dedb6628de#015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: #015
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  2 17:18:38 compute-0 nova_compute[189459]: 2025-12-02 17:18:38.701 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-d20422e7-48ac-4100-9ae0-4322baab5766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:18:38 compute-0 nova_compute[189459]: 2025-12-02 17:18:38.701 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:18:38 compute-0 nova_compute[189459]: 2025-12-02 17:18:38.702 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:38.901 106942 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  2 17:18:38 compute-0 haproxy-metadata-proxy-13fd0660-de1e-41a8-9f35-c1dedb6628de[254193]: 10.100.0.4:37866 [02/Dec/2025:17:18:38.681] listener listener/metadata 0/0/0/221/221 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  2 17:18:38 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:38.904 106942 INFO eventlet.wsgi.server [-] 10.100.0.4,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2207167#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.429 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.451 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.452 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.453 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.453 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.539 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.637 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.639 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.711 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.720 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.798 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.799 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:18:39 compute-0 nova_compute[189459]: 2025-12-02 17:18:39.863 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.247 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.249 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4988MB free_disk=72.0683822631836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.249 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.249 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.338 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.338 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance d20422e7-48ac-4100-9ae0-4322baab5766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.339 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.339 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.425 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.448 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.451 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:18:40 compute-0 nova_compute[189459]: 2025-12-02 17:18:40.452 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.372 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.374 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.375 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.376 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.377 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.380 189463 INFO nova.compute.manager [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Terminating instance#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.382 189463 DEBUG nova.compute.manager [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:18:41 compute-0 kernel: tapb08a0c1b-22 (unregistering): left promiscuous mode
Dec  2 17:18:41 compute-0 NetworkManager[56503]: <info>  [1764695921.4181] device (tapb08a0c1b-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.431 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 ovn_controller[97975]: 2025-12-02T17:18:41Z|00162|binding|INFO|Releasing lport b08a0c1b-2269-4358-92a6-a6be384d5bf6 from this chassis (sb_readonly=0)
Dec  2 17:18:41 compute-0 ovn_controller[97975]: 2025-12-02T17:18:41Z|00163|binding|INFO|Setting lport b08a0c1b-2269-4358-92a6-a6be384d5bf6 down in Southbound
Dec  2 17:18:41 compute-0 ovn_controller[97975]: 2025-12-02T17:18:41Z|00164|binding|INFO|Removing iface tapb08a0c1b-22 ovn-installed in OVS
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.441 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.447 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:3b:98 10.100.0.4'], port_security=['fa:16:3e:0e:3b:98 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd20422e7-48ac-4100-9ae0-4322baab5766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c363afe1862442af83665e6092410d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c05ec07f-0fc3-4ae0-ae11-843209d86dab f17c8727-3b06-44be-8319-1c1a9cb3ffb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=731e4b34-a15c-43f9-a44d-c854f843df51, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=b08a0c1b-2269-4358-92a6-a6be384d5bf6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.449 106835 INFO neutron.agent.ovn.metadata.agent [-] Port b08a0c1b-2269-4358-92a6-a6be384d5bf6 in datapath 13fd0660-de1e-41a8-9f35-c1dedb6628de unbound from our chassis#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.451 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13fd0660-de1e-41a8-9f35-c1dedb6628de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.455 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e12368-937d-4a8e-bba3-4a4dabc0e8c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.457 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de namespace which is not needed anymore#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.472 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  2 17:18:41 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 43.026s CPU time.
Dec  2 17:18:41 compute-0 systemd-machined[155878]: Machine qemu-15-instance-0000000e terminated.
Dec  2 17:18:41 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [NOTICE]   (254191) : haproxy version is 2.8.14-c23fe91
Dec  2 17:18:41 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [NOTICE]   (254191) : path to executable is /usr/sbin/haproxy
Dec  2 17:18:41 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [WARNING]  (254191) : Exiting Master process...
Dec  2 17:18:41 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [ALERT]    (254191) : Current worker (254193) exited with code 143 (Terminated)
Dec  2 17:18:41 compute-0 neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de[254183]: [WARNING]  (254191) : All workers exited. Exiting... (0)
Dec  2 17:18:41 compute-0 systemd[1]: libpod-8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9.scope: Deactivated successfully.
Dec  2 17:18:41 compute-0 conmon[254183]: conmon 8513041393e6b67b10d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9.scope/container/memory.events
Dec  2 17:18:41 compute-0 podman[254744]: 2025-12-02 17:18:41.667118606 +0000 UTC m=+0.085949625 container died 8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.675 189463 INFO nova.virt.libvirt.driver [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Instance destroyed successfully.#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.676 189463 DEBUG nova.objects.instance [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lazy-loading 'resources' on Instance uuid d20422e7-48ac-4100-9ae0-4322baab5766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.692 189463 DEBUG nova.virt.libvirt.vif [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1299461533',display_name='tempest-TestServerBasicOps-server-1299461533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1299461533',id=14,image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCs23Mb+Vk3wKBV8WjWEX3fZPocdwp7onjmdI+QXwb8YmPIjZmtTXjaOcMqr98JZL/Jqu/Le+rXG3IZ1AHlM7JFWl4QV48OuFEnMKIIJKeASaTqWqZeiyD9aIWCL1bDM9Q==',key_name='tempest-TestServerBasicOps-1986499285',keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c363afe1862442af83665e6092410d18',ramdisk_id='',reservation_id='r-6fmg9z8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b90f8403-6db1-4b01-bb62-c5b878a5c904',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-2018574224',owner_user_name='tempest-TestServerBasicOps-2018574224-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:18:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='262f5a0a8792434786ece0f667375e02',uuid=d20422e7-48ac-4100-9ae0-4322baab5766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": 
"fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.693 189463 DEBUG nova.network.os_vif_util [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Converting VIF {"id": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "address": "fa:16:3e:0e:3b:98", "network": {"id": "13fd0660-de1e-41a8-9f35-c1dedb6628de", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1527406235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c363afe1862442af83665e6092410d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb08a0c1b-22", "ovs_interfaceid": "b08a0c1b-2269-4358-92a6-a6be384d5bf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.694 189463 DEBUG nova.network.os_vif_util [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.695 189463 DEBUG os_vif [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.698 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.698 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb08a0c1b-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.702 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.703 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.706 189463 INFO os_vif [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:3b:98,bridge_name='br-int',has_traffic_filtering=True,id=b08a0c1b-2269-4358-92a6-a6be384d5bf6,network=Network(13fd0660-de1e-41a8-9f35-c1dedb6628de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb08a0c1b-22')#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.706 189463 INFO nova.virt.libvirt.driver [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Deleting instance files /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766_del#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.707 189463 INFO nova.virt.libvirt.driver [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Deletion of /var/lib/nova/instances/d20422e7-48ac-4100-9ae0-4322baab5766_del complete#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.714 189463 DEBUG nova.compute.manager [req-94bc775c-6079-4550-ae1a-84ca8912b119 req-4aa7c2fd-c335-4d6c-83b3-08dbb95d4806 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-vif-unplugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.714 189463 DEBUG oslo_concurrency.lockutils [req-94bc775c-6079-4550-ae1a-84ca8912b119 req-4aa7c2fd-c335-4d6c-83b3-08dbb95d4806 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.714 189463 DEBUG oslo_concurrency.lockutils [req-94bc775c-6079-4550-ae1a-84ca8912b119 req-4aa7c2fd-c335-4d6c-83b3-08dbb95d4806 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.714 189463 DEBUG oslo_concurrency.lockutils [req-94bc775c-6079-4550-ae1a-84ca8912b119 req-4aa7c2fd-c335-4d6c-83b3-08dbb95d4806 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.714 189463 DEBUG nova.compute.manager [req-94bc775c-6079-4550-ae1a-84ca8912b119 req-4aa7c2fd-c335-4d6c-83b3-08dbb95d4806 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] No waiting events found dispatching network-vif-unplugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.715 189463 DEBUG nova.compute.manager [req-94bc775c-6079-4550-ae1a-84ca8912b119 req-4aa7c2fd-c335-4d6c-83b3-08dbb95d4806 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-vif-unplugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9-userdata-shm.mount: Deactivated successfully.
Dec  2 17:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d7d05bbd7c8c649e4c095a489730dc7242d6ad3bc2a69a7fbaa808828ef8ce1-merged.mount: Deactivated successfully.
Dec  2 17:18:41 compute-0 podman[254744]: 2025-12-02 17:18:41.748188961 +0000 UTC m=+0.167019980 container cleanup 8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  2 17:18:41 compute-0 systemd[1]: libpod-conmon-8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9.scope: Deactivated successfully.
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.784 189463 INFO nova.compute.manager [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.785 189463 DEBUG oslo.service.loopingcall [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.785 189463 DEBUG nova.compute.manager [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.785 189463 DEBUG nova.network.neutron [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:18:41 compute-0 podman[254787]: 2025-12-02 17:18:41.82752233 +0000 UTC m=+0.052709146 container remove 8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.835 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[07f4c12e-c26f-431e-888f-a046556ad06f]: (4, ('Tue Dec  2 05:18:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de (8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9)\n8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9\nTue Dec  2 05:18:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de (8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9)\n8513041393e6b67b10d2cc6842db3355425896b832b7cea103e5b256fff3b0c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.837 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[15b81e8b-d36f-4841-884a-e68d87e0b720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.839 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13fd0660-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.843 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 kernel: tap13fd0660-d0: left promiscuous mode
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.861 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 nova_compute[189459]: 2025-12-02 17:18:41.864 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.866 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[795bd904-c704-46db-9e50-0b633559f215]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.888 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[f84c3fc7-00cf-47ce-b6b2-bf031d39adbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.891 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f71370-c326-4e6a-a5c7-566c624bc716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.912 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[48d9d590-3853-45a8-8980-51da16e9a883]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539725, 'reachable_time': 32992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254802, 'error': None, 'target': 'ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d13fd0660\x2dde1e\x2d41a8\x2d9f35\x2dc1dedb6628de.mount: Deactivated successfully.
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.928 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13fd0660-de1e-41a8-9f35-c1dedb6628de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:18:41 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:41.928 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3513a7-e603-4015-a172-8579e17a62c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:18:42 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:42.413 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:18:42 compute-0 nova_compute[189459]: 2025-12-02 17:18:42.415 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:42 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:42.416 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:18:42 compute-0 nova_compute[189459]: 2025-12-02 17:18:42.433 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:42 compute-0 nova_compute[189459]: 2025-12-02 17:18:42.956 189463 DEBUG nova.network.neutron [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:18:42 compute-0 nova_compute[189459]: 2025-12-02 17:18:42.975 189463 INFO nova.compute.manager [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Took 1.19 seconds to deallocate network for instance.#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.043 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.043 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.127 189463 DEBUG nova.compute.provider_tree [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.140 189463 DEBUG nova.scheduler.client.report [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.167 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.200 189463 INFO nova.scheduler.client.report [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Deleted allocations for instance d20422e7-48ac-4100-9ae0-4322baab5766#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.257 189463 DEBUG oslo_concurrency.lockutils [None req-600f90a6-6bc9-4651-b841-4fb6b0d121f3 262f5a0a8792434786ece0f667375e02 c363afe1862442af83665e6092410d18 - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.595 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.787 189463 DEBUG nova.compute.manager [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.788 189463 DEBUG oslo_concurrency.lockutils [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.788 189463 DEBUG oslo_concurrency.lockutils [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.788 189463 DEBUG oslo_concurrency.lockutils [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "d20422e7-48ac-4100-9ae0-4322baab5766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.789 189463 DEBUG nova.compute.manager [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] No waiting events found dispatching network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.789 189463 WARNING nova.compute.manager [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received unexpected event network-vif-plugged-b08a0c1b-2269-4358-92a6-a6be384d5bf6 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:18:43 compute-0 nova_compute[189459]: 2025-12-02 17:18:43.789 189463 DEBUG nova.compute.manager [req-3e13c4fc-3c8b-4007-a141-0f4711f3ab30 req-b0a0160a-2b0d-46bd-a544-b66104af77e5 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Received event network-vif-deleted-b08a0c1b-2269-4358-92a6-a6be384d5bf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:18:45 compute-0 nova_compute[189459]: 2025-12-02 17:18:45.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:18:46 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:18:46.418 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:18:46 compute-0 nova_compute[189459]: 2025-12-02 17:18:46.703 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:47 compute-0 podman[254804]: 2025-12-02 17:18:47.297663128 +0000 UTC m=+0.112708863 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:18:47 compute-0 podman[254805]: 2025-12-02 17:18:47.318768746 +0000 UTC m=+0.115482896 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:18:47 compute-0 podman[254803]: 2025-12-02 17:18:47.354318077 +0000 UTC m=+0.173823460 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:18:48 compute-0 nova_compute[189459]: 2025-12-02 17:18:48.598 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:49 compute-0 ovn_controller[97975]: 2025-12-02T17:18:49Z|00165|binding|INFO|Releasing lport 3390bd6d-860e-4bcb-929b-c08f611343b9 from this chassis (sb_readonly=0)
Dec  2 17:18:49 compute-0 nova_compute[189459]: 2025-12-02 17:18:49.906 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:50 compute-0 ovn_controller[97975]: 2025-12-02T17:18:50Z|00166|binding|INFO|Releasing lport 3390bd6d-860e-4bcb-929b-c08f611343b9 from this chassis (sb_readonly=0)
Dec  2 17:18:50 compute-0 nova_compute[189459]: 2025-12-02 17:18:50.137 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:51 compute-0 nova_compute[189459]: 2025-12-02 17:18:51.709 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:53 compute-0 nova_compute[189459]: 2025-12-02 17:18:53.601 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:56 compute-0 nova_compute[189459]: 2025-12-02 17:18:56.672 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764695921.6695163, d20422e7-48ac-4100-9ae0-4322baab5766 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:18:56 compute-0 nova_compute[189459]: 2025-12-02 17:18:56.673 189463 INFO nova.compute.manager [-] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:18:56 compute-0 nova_compute[189459]: 2025-12-02 17:18:56.706 189463 DEBUG nova.compute.manager [None req-3137d2af-82fc-4c04-8eeb-9ad8336964aa - - - - - -] [instance: d20422e7-48ac-4100-9ae0-4322baab5766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:18:56 compute-0 nova_compute[189459]: 2025-12-02 17:18:56.714 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:58 compute-0 nova_compute[189459]: 2025-12-02 17:18:58.603 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:18:59 compute-0 podman[203941]: time="2025-12-02T17:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:18:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:18:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Dec  2 17:19:00 compute-0 podman[254875]: 2025-12-02 17:19:00.280874007 +0000 UTC m=+0.097440999 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git)
Dec  2 17:19:01 compute-0 openstack_network_exporter[206093]: ERROR   17:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:19:01 compute-0 openstack_network_exporter[206093]: ERROR   17:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:19:01 compute-0 openstack_network_exporter[206093]: ERROR   17:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:19:01 compute-0 openstack_network_exporter[206093]: ERROR   17:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:19:01 compute-0 openstack_network_exporter[206093]: ERROR   17:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:19:01 compute-0 nova_compute[189459]: 2025-12-02 17:19:01.716 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:19:01.889 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:19:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:19:01.890 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:19:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:19:01.891 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.056 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.058 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.066 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3a077761-3f4d-47af-aea2-9c3255ed7868 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.068 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3a077761-3f4d-47af-aea2-9c3255ed7868 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:19:03 compute-0 nova_compute[189459]: 2025-12-02 17:19:03.607 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.652 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 02 Dec 2025 17:19:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-0650d60f-9b65-41b3-aa69-5fbceed807ab x-openstack-request-id: req-0650d60f-9b65-41b3-aa69-5fbceed807ab _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.653 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3a077761-3f4d-47af-aea2-9c3255ed7868", "name": "te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3", "status": "ACTIVE", "tenant_id": "d97265454999468fb261510e60c81b0e", "user_id": "5673ab6de24147cb96ea139c0ad6cb0e", "metadata": {"metering.server_group": "bb3de81f-f629-45e4-a58b-8725288b0515"}, "hostId": "24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b", "image": {"id": "53890fe7-10ca-4d2d-8959-827e6ad0a9a2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53890fe7-10ca-4d2d-8959-827e6ad0a9a2"}]}, "flavor": {"id": "8e4a4b21-ee56-489d-aeb9-f21b8412f996", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8e4a4b21-ee56-489d-aeb9-f21b8412f996"}]}, "created": "2025-12-02T17:17:15Z", "updated": "2025-12-02T17:17:26Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.185", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:66:75:a2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3a077761-3f4d-47af-aea2-9c3255ed7868"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3a077761-3f4d-47af-aea2-9c3255ed7868"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-02T17:17:26.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.653 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3a077761-3f4d-47af-aea2-9c3255ed7868 used request id req-0650d60f-9b65-41b3-aa69-5fbceed807ab request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.654 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.655 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.655 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:19:03.655814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.662 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3a077761-3f4d-47af-aea2-9c3255ed7868 / tap68e04713-a4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.663 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.664 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.664 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.664 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.665 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.666 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.666 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:19:03.664613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:19:03.666317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.686 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.687 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.688 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:19:03.688900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.715 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 95730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.716 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:19:03.716949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.768 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 28728320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.768 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.769 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 663212234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 67026549 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.770 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.771 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1028 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.771 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.771 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:19:03.769689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:19:03.770941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.772 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.773 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.775 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.775 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:19:03.772512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:19:03.773782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:19:03.774888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3584130836 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.776 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.777 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:19:03.776210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:19:03.777790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.779 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.779 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.779 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.779 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.779 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.780 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:19:03.779030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:19:03.780161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3>]
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T17:19:03.781090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.783 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:19:03.782407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.783 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.783 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:19:03.783538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:19:03.784479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.786 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:19:03.785423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.786 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.786 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.786 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:19:03.786598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.787 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:19:03.787778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.788 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:19:03.788797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.789 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 43.359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.790 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:19:03.789784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.790 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T17:19:03.791025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3>]
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.791 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:19:03.792014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.793 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.793 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.793 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:19:03.792963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:19:03.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:19:06 compute-0 podman[254904]: 2025-12-02 17:19:06.332065719 +0000 UTC m=+0.138073595 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  2 17:19:06 compute-0 podman[254905]: 2025-12-02 17:19:06.357740298 +0000 UTC m=+0.156294717 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:19:06 compute-0 nova_compute[189459]: 2025-12-02 17:19:06.723 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:08 compute-0 nova_compute[189459]: 2025-12-02 17:19:08.610 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:09 compute-0 podman[254942]: 2025-12-02 17:19:09.270639562 +0000 UTC m=+0.072366196 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:19:09 compute-0 podman[254941]: 2025-12-02 17:19:09.272665715 +0000 UTC m=+0.092851017 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:19:09 compute-0 podman[254940]: 2025-12-02 17:19:09.299740502 +0000 UTC m=+0.111593924 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  2 17:19:11 compute-0 nova_compute[189459]: 2025-12-02 17:19:11.728 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:13 compute-0 nova_compute[189459]: 2025-12-02 17:19:13.612 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:16 compute-0 nova_compute[189459]: 2025-12-02 17:19:16.733 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:18 compute-0 podman[254995]: 2025-12-02 17:19:18.281774392 +0000 UTC m=+0.077538533 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:19:18 compute-0 podman[254994]: 2025-12-02 17:19:18.295256668 +0000 UTC m=+0.089655683 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:19:18 compute-0 podman[254993]: 2025-12-02 17:19:18.351007803 +0000 UTC m=+0.159686506 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:19:18 compute-0 nova_compute[189459]: 2025-12-02 17:19:18.614 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:21 compute-0 nova_compute[189459]: 2025-12-02 17:19:21.736 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:23 compute-0 nova_compute[189459]: 2025-12-02 17:19:23.616 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:26 compute-0 nova_compute[189459]: 2025-12-02 17:19:26.739 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:28 compute-0 nova_compute[189459]: 2025-12-02 17:19:28.619 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:29 compute-0 podman[203941]: time="2025-12-02T17:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:19:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:19:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Dec  2 17:19:31 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 17:19:31 compute-0 podman[255061]: 2025-12-02 17:19:31.199705014 +0000 UTC m=+0.096682359 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9)
Dec  2 17:19:31 compute-0 nova_compute[189459]: 2025-12-02 17:19:31.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: ERROR   17:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: ERROR   17:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: ERROR   17:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: ERROR   17:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: ERROR   17:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:19:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:19:31 compute-0 nova_compute[189459]: 2025-12-02 17:19:31.742 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:33 compute-0 nova_compute[189459]: 2025-12-02 17:19:33.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:19:33 compute-0 nova_compute[189459]: 2025-12-02 17:19:33.621 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.413 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.414 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.567 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.569 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.570 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.572 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:19:36 compute-0 nova_compute[189459]: 2025-12-02 17:19:36.746 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:37 compute-0 podman[255081]: 2025-12-02 17:19:37.281736051 +0000 UTC m=+0.097839539 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:19:37 compute-0 podman[255080]: 2025-12-02 17:19:37.284975647 +0000 UTC m=+0.102234246 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:19:38 compute-0 nova_compute[189459]: 2025-12-02 17:19:38.624 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:39 compute-0 nova_compute[189459]: 2025-12-02 17:19:39.860 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:19:39 compute-0 nova_compute[189459]: 2025-12-02 17:19:39.879 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:19:39 compute-0 nova_compute[189459]: 2025-12-02 17:19:39.880 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:19:39 compute-0 nova_compute[189459]: 2025-12-02 17:19:39.881 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:19:40 compute-0 podman[255117]: 2025-12-02 17:19:40.259948223 +0000 UTC m=+0.078842737 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:19:40 compute-0 podman[255116]: 2025-12-02 17:19:40.262792919 +0000 UTC m=+0.084224900 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec  2 17:19:40 compute-0 podman[255115]: 2025-12-02 17:19:40.272944427 +0000 UTC m=+0.093650209 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.438 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.439 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.440 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.530 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.610 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.611 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.677 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:19:41 compute-0 nova_compute[189459]: 2025-12-02 17:19:41.751 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.074 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.077 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5076MB free_disk=72.09354400634766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.078 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.078 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.167 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.168 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.169 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.227 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.248 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.273 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.275 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:19:42 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:19:42.719 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 17:19:42 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:19:42.720 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  2 17:19:42 compute-0 nova_compute[189459]: 2025-12-02 17:19:42.724 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:43 compute-0 nova_compute[189459]: 2025-12-02 17:19:43.276 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:19:43 compute-0 nova_compute[189459]: 2025-12-02 17:19:43.625 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:44 compute-0 nova_compute[189459]: 2025-12-02 17:19:44.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:19:44 compute-0 nova_compute[189459]: 2025-12-02 17:19:44.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 17:19:46 compute-0 nova_compute[189459]: 2025-12-02 17:19:46.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:19:46 compute-0 nova_compute[189459]: 2025-12-02 17:19:46.758 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:48 compute-0 nova_compute[189459]: 2025-12-02 17:19:48.628 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:49 compute-0 podman[255181]: 2025-12-02 17:19:49.299997729 +0000 UTC m=+0.108220824 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:19:49 compute-0 podman[255180]: 2025-12-02 17:19:49.324031445 +0000 UTC m=+0.132356323 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:19:49 compute-0 podman[255179]: 2025-12-02 17:19:49.34084672 +0000 UTC m=+0.163448426 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Dec  2 17:19:51 compute-0 nova_compute[189459]: 2025-12-02 17:19:51.763 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:52 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:19:52.723 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 17:19:53 compute-0 nova_compute[189459]: 2025-12-02 17:19:53.632 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:56 compute-0 nova_compute[189459]: 2025-12-02 17:19:56.770 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:58 compute-0 nova_compute[189459]: 2025-12-02 17:19:58.635 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.410 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.411 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.411 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.412 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.412 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.412 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.446 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.466 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.467 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Image id 53890fe7-10ca-4d2d-8959-827e6ad0a9a2 yields fingerprint 30c7a5bf10b220ad4028f2d500ff77f76aa72dba _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.467 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] image 53890fe7-10ca-4d2d-8959-827e6ad0a9a2 at (/var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba): checking
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.468 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] image 53890fe7-10ca-4d2d-8959-827e6ad0a9a2 at (/var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.470 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.471 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] 3a077761-3f4d-47af-aea2-9c3255ed7868 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.471 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] 3a077761-3f4d-47af-aea2-9c3255ed7868 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.472 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.539 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.541 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 is backed by 30c7a5bf10b220ad4028f2d500ff77f76aa72dba _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.542 189463 WARNING nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.543 189463 WARNING nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.544 189463 WARNING nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.546 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Active base files: /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.547 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Removable base files: /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31 /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6 /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.549 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f75af7a5e837c1ca61378fc78133e18a40f43f31
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.550 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a2d15f7c2922ae6c8da2b52b57bb19145907dde6
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.550 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/32bc5c5b2a17e06e78561597f1b90498e3f742b7
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.551 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.551 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.551 189463 DEBUG nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec  2 17:19:59 compute-0 nova_compute[189459]: 2025-12-02 17:19:59.552 189463 INFO nova.virt.libvirt.imagecache [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec  2 17:19:59 compute-0 podman[203941]: time="2025-12-02T17:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:19:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:19:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: ERROR   17:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: ERROR   17:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: ERROR   17:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: ERROR   17:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: ERROR   17:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:20:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:20:01 compute-0 nova_compute[189459]: 2025-12-02 17:20:01.773 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:20:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:20:01.890 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:20:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:20:01.891 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:20:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:20:01.892 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:20:02 compute-0 podman[255253]: 2025-12-02 17:20:02.235408444 +0000 UTC m=+0.070374474 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is 
a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  2 17:20:03 compute-0 nova_compute[189459]: 2025-12-02 17:20:03.636 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:20:06 compute-0 nova_compute[189459]: 2025-12-02 17:20:06.778 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:20:08 compute-0 podman[255273]: 2025-12-02 17:20:08.276723503 +0000 UTC m=+0.100384857 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:20:08 compute-0 podman[255274]: 2025-12-02 17:20:08.312303795 +0000 UTC m=+0.127017802 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  2 17:20:08 compute-0 nova_compute[189459]: 2025-12-02 17:20:08.640 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:20:11 compute-0 podman[255311]: 2025-12-02 17:20:11.261336875 +0000 UTC m=+0.070662371 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  2 17:20:11 compute-0 podman[255309]: 2025-12-02 17:20:11.275832438 +0000 UTC m=+0.091524772 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:20:11 compute-0 podman[255310]: 2025-12-02 17:20:11.304646141 +0000 UTC m=+0.123100629 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, version=9.4, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': 
'/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:20:11 compute-0 nova_compute[189459]: 2025-12-02 17:20:11.783 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:13 compute-0 nova_compute[189459]: 2025-12-02 17:20:13.643 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:16 compute-0 nova_compute[189459]: 2025-12-02 17:20:16.789 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:18 compute-0 nova_compute[189459]: 2025-12-02 17:20:18.647 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:20 compute-0 podman[255367]: 2025-12-02 17:20:20.282293666 +0000 UTC m=+0.090807224 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:20:20 compute-0 podman[255368]: 2025-12-02 17:20:20.309408263 +0000 UTC m=+0.113778931 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:20:20 compute-0 podman[255366]: 2025-12-02 17:20:20.390303463 +0000 UTC m=+0.205484528 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:20:21 compute-0 nova_compute[189459]: 2025-12-02 17:20:21.792 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:22 compute-0 ovn_controller[97975]: 2025-12-02T17:20:22Z|00167|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  2 17:20:23 compute-0 nova_compute[189459]: 2025-12-02 17:20:23.649 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:26 compute-0 nova_compute[189459]: 2025-12-02 17:20:26.795 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:28 compute-0 nova_compute[189459]: 2025-12-02 17:20:28.652 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:29 compute-0 podman[203941]: time="2025-12-02T17:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:20:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:20:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: ERROR   17:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: ERROR   17:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: ERROR   17:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: ERROR   17:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: ERROR   17:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:20:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:20:31 compute-0 nova_compute[189459]: 2025-12-02 17:20:31.552 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:31 compute-0 nova_compute[189459]: 2025-12-02 17:20:31.800 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:33 compute-0 podman[255432]: 2025-12-02 17:20:33.279340664 +0000 UTC m=+0.101817632 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  2 17:20:33 compute-0 nova_compute[189459]: 2025-12-02 17:20:33.656 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:34 compute-0 nova_compute[189459]: 2025-12-02 17:20:34.412 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:36 compute-0 nova_compute[189459]: 2025-12-02 17:20:36.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:36 compute-0 nova_compute[189459]: 2025-12-02 17:20:36.818 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.842 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.845 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.845 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:20:37 compute-0 nova_compute[189459]: 2025-12-02 17:20:37.846 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:20:38 compute-0 nova_compute[189459]: 2025-12-02 17:20:38.658 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:39 compute-0 podman[255452]: 2025-12-02 17:20:39.255351052 +0000 UTC m=+0.084551113 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2)
Dec  2 17:20:39 compute-0 podman[255453]: 2025-12-02 17:20:39.290855317 +0000 UTC m=+0.111287924 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 17:20:40 compute-0 nova_compute[189459]: 2025-12-02 17:20:40.533 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:20:40 compute-0 nova_compute[189459]: 2025-12-02 17:20:40.554 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:20:40 compute-0 nova_compute[189459]: 2025-12-02 17:20:40.555 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:20:40 compute-0 nova_compute[189459]: 2025-12-02 17:20:40.556 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:40 compute-0 nova_compute[189459]: 2025-12-02 17:20:40.557 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:41 compute-0 nova_compute[189459]: 2025-12-02 17:20:41.421 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:41 compute-0 nova_compute[189459]: 2025-12-02 17:20:41.422 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:20:41 compute-0 nova_compute[189459]: 2025-12-02 17:20:41.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:20:41 compute-0 nova_compute[189459]: 2025-12-02 17:20:41.827 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:42 compute-0 podman[255496]: 2025-12-02 17:20:42.270138598 +0000 UTC m=+0.073492118 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:20:42 compute-0 podman[255489]: 2025-12-02 17:20:42.287581903 +0000 UTC m=+0.112864337 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:20:42 compute-0 podman[255490]: 2025-12-02 17:20:42.309800324 +0000 UTC m=+0.117955132 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, 
com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9)
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.423 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.424 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.425 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.449 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.450 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.451 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.451 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.562 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.657 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.658 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:20:42 compute-0 nova_compute[189459]: 2025-12-02 17:20:42.758 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.174 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.175 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5089MB free_disk=72.09413146972656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.176 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.176 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.396 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.397 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.397 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.476 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.554 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.555 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.566 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.592 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.632 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.644 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.645 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.646 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:20:43 compute-0 nova_compute[189459]: 2025-12-02 17:20:43.659 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:45 compute-0 nova_compute[189459]: 2025-12-02 17:20:45.631 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:45 compute-0 nova_compute[189459]: 2025-12-02 17:20:45.632 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:20:46 compute-0 nova_compute[189459]: 2025-12-02 17:20:46.835 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:48 compute-0 nova_compute[189459]: 2025-12-02 17:20:48.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:48 compute-0 nova_compute[189459]: 2025-12-02 17:20:48.661 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:51 compute-0 podman[255552]: 2025-12-02 17:20:51.245883261 +0000 UTC m=+0.069746968 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:20:51 compute-0 podman[255551]: 2025-12-02 17:20:51.264578699 +0000 UTC m=+0.072431440 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:20:51 compute-0 podman[255550]: 2025-12-02 17:20:51.291125045 +0000 UTC m=+0.123976932 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 17:20:51 compute-0 nova_compute[189459]: 2025-12-02 17:20:51.838 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:53 compute-0 nova_compute[189459]: 2025-12-02 17:20:53.665 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:56 compute-0 nova_compute[189459]: 2025-12-02 17:20:56.844 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:58 compute-0 nova_compute[189459]: 2025-12-02 17:20:58.667 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:20:59 compute-0 nova_compute[189459]: 2025-12-02 17:20:59.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:20:59 compute-0 nova_compute[189459]: 2025-12-02 17:20:59.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:20:59 compute-0 podman[203941]: time="2025-12-02T17:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:20:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:20:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: ERROR   17:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: ERROR   17:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: ERROR   17:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: ERROR   17:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: ERROR   17:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:21:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:21:01 compute-0 nova_compute[189459]: 2025-12-02 17:21:01.847 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:01.891 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:01.892 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:01.893 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.057 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.058 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007d6f1eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.074 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.075 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:21:03.074932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.081 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:21:03.084117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.084 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.085 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:21:03.086342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.109 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.110 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.111 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:21:03.112464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.139 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 214920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.140 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:21:03.141037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.180 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 28728320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.181 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 663212234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 67026549 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1028 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.183 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:21:03.182196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:21:03.183453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:21:03.184722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:21:03.185859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.186 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3584130836 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:21:03.186921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:21:03.188003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:21:03.189110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:21:03.189965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:21:03.191046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:21:03.192077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.193 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:21:03.193091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:21:03.194447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:21:03.195589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.197 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:21:03.196902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:21:03.198185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.198 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:21:03.199499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.200 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 43.359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:21:03.200804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.201 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:21:03.202326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.202 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:21:03.203730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.204 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:21:03.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:21:03 compute-0 nova_compute[189459]: 2025-12-02 17:21:03.670 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:04 compute-0 podman[255623]: 2025-12-02 17:21:04.300266809 +0000 UTC m=+0.115354493 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc.)
Dec  2 17:21:06 compute-0 nova_compute[189459]: 2025-12-02 17:21:06.851 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:08 compute-0 nova_compute[189459]: 2025-12-02 17:21:08.674 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:10 compute-0 podman[255646]: 2025-12-02 17:21:10.296474553 +0000 UTC m=+0.105494680 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:21:10 compute-0 podman[255645]: 2025-12-02 17:21:10.318652584 +0000 UTC m=+0.133658070 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:21:11 compute-0 nova_compute[189459]: 2025-12-02 17:21:11.856 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:13 compute-0 podman[255689]: 2025-12-02 17:21:13.264945706 +0000 UTC m=+0.078270285 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:21:13 compute-0 podman[255683]: 2025-12-02 17:21:13.277179542 +0000 UTC m=+0.090065589 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:21:13 compute-0 podman[255682]: 2025-12-02 17:21:13.277697896 +0000 UTC m=+0.104474703 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  2 17:21:13 compute-0 nova_compute[189459]: 2025-12-02 17:21:13.676 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:15 compute-0 nova_compute[189459]: 2025-12-02 17:21:15.745 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:15 compute-0 nova_compute[189459]: 2025-12-02 17:21:15.773 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 17:21:15 compute-0 nova_compute[189459]: 2025-12-02 17:21:15.774 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:15 compute-0 nova_compute[189459]: 2025-12-02 17:21:15.774 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:15 compute-0 nova_compute[189459]: 2025-12-02 17:21:15.802 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:16 compute-0 nova_compute[189459]: 2025-12-02 17:21:16.862 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:18 compute-0 nova_compute[189459]: 2025-12-02 17:21:18.679 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:21 compute-0 nova_compute[189459]: 2025-12-02 17:21:21.870 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:22 compute-0 podman[255739]: 2025-12-02 17:21:22.254444836 +0000 UTC m=+0.069995375 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:21:22 compute-0 podman[255738]: 2025-12-02 17:21:22.275003163 +0000 UTC m=+0.095197236 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:21:22 compute-0 podman[255737]: 2025-12-02 17:21:22.287781293 +0000 UTC m=+0.114233812 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 17:21:23 compute-0 nova_compute[189459]: 2025-12-02 17:21:23.681 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:26 compute-0 nova_compute[189459]: 2025-12-02 17:21:26.874 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:28 compute-0 nova_compute[189459]: 2025-12-02 17:21:28.685 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:29 compute-0 podman[203941]: time="2025-12-02T17:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:21:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:21:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: ERROR   17:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: ERROR   17:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: ERROR   17:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: ERROR   17:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: ERROR   17:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:21:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:21:31 compute-0 nova_compute[189459]: 2025-12-02 17:21:31.439 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:31 compute-0 nova_compute[189459]: 2025-12-02 17:21:31.877 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.849 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.851 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.885 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.979 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.980 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.992 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 17:21:32 compute-0 nova_compute[189459]: 2025-12-02 17:21:32.992 189463 INFO nova.compute.claims [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.120 189463 DEBUG nova.compute.provider_tree [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.134 189463 DEBUG nova.scheduler.client.report [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.159 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.160 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.214 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.215 189463 DEBUG nova.network.neutron [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.230 189463 INFO nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.259 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.351 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.353 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.354 189463 INFO nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Creating image(s)#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.355 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "/var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.356 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "/var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.357 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "/var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.377 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.469 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.471 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.471 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.483 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.542 189463 DEBUG nova.policy [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd97265454999468fb261510e60c81b0e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.550 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.550 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba,backing_fmt=raw /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.594 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba,backing_fmt=raw /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.596 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "30c7a5bf10b220ad4028f2d500ff77f76aa72dba" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.597 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.672 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/30c7a5bf10b220ad4028f2d500ff77f76aa72dba --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.673 189463 DEBUG nova.virt.disk.api [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Checking if we can resize image /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.674 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.690 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.736 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.738 189463 DEBUG nova.virt.disk.api [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Cannot resize image /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.740 189463 DEBUG nova.objects.instance [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lazy-loading 'migration_context' on Instance uuid 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.796 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.797 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Ensure instance console log exists: /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.798 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.798 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:33 compute-0 nova_compute[189459]: 2025-12-02 17:21:33.799 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:34 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:34.120 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:21:34 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:34.125 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:21:34 compute-0 nova_compute[189459]: 2025-12-02 17:21:34.127 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:34 compute-0 nova_compute[189459]: 2025-12-02 17:21:34.494 189463 DEBUG nova.network.neutron [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Successfully created port: b7169bf1-4de3-40ed-bda2-cdae863fd264 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 17:21:35 compute-0 podman[255822]: 2025-12-02 17:21:35.247275554 +0000 UTC m=+0.076865757 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Dec  2 17:21:35 compute-0 nova_compute[189459]: 2025-12-02 17:21:35.800 189463 DEBUG nova.network.neutron [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Successfully updated port: b7169bf1-4de3-40ed-bda2-cdae863fd264 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.172 189463 DEBUG nova.compute.manager [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-changed-b7169bf1-4de3-40ed-bda2-cdae863fd264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.173 189463 DEBUG nova.compute.manager [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Refreshing instance network info cache due to event network-changed-b7169bf1-4de3-40ed-bda2-cdae863fd264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.174 189463 DEBUG oslo_concurrency.lockutils [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.175 189463 DEBUG oslo_concurrency.lockutils [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquired lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.175 189463 DEBUG nova.network.neutron [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Refreshing network info cache for port b7169bf1-4de3-40ed-bda2-cdae863fd264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.181 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.491 189463 DEBUG nova.network.neutron [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:21:36 compute-0 nova_compute[189459]: 2025-12-02 17:21:36.886 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:37 compute-0 nova_compute[189459]: 2025-12-02 17:21:37.149 189463 DEBUG nova.network.neutron [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:21:37 compute-0 nova_compute[189459]: 2025-12-02 17:21:37.212 189463 DEBUG oslo_concurrency.lockutils [req-1fedcc4d-c7d1-4ade-892f-4ee73e9b52f6 req-1c1472ed-bf41-4824-809a-f9636a13a4e7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Releasing lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:21:37 compute-0 nova_compute[189459]: 2025-12-02 17:21:37.213 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquired lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:21:37 compute-0 nova_compute[189459]: 2025-12-02 17:21:37.214 189463 DEBUG nova.network.neutron [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 17:21:37 compute-0 nova_compute[189459]: 2025-12-02 17:21:37.540 189463 DEBUG nova.network.neutron [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.411 189463 DEBUG nova.network.neutron [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.414 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.414 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.415 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.496 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.504 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Releasing lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.505 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Instance network_info: |[{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.509 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Start _get_guest_xml network_info=[{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:17:05Z,direct_url=<?>,disk_format='qcow2',id=53890fe7-10ca-4d2d-8959-827e6ad0a9a2,min_disk=0,min_ram=0,name='tempest-scenario-img--1502674318',owner='d97265454999468fb261510e60c81b0e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:17:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'size': 0, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.516 189463 WARNING nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.521 189463 DEBUG nova.virt.libvirt.host [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.522 189463 DEBUG nova.virt.libvirt.host [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.526 189463 DEBUG nova.virt.libvirt.host [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.527 189463 DEBUG nova.virt.libvirt.host [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.528 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.528 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T17:12:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8e4a4b21-ee56-489d-aeb9-f21b8412f996',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T17:17:05Z,direct_url=<?>,disk_format='qcow2',id=53890fe7-10ca-4d2d-8959-827e6ad0a9a2,min_disk=0,min_ram=0,name='tempest-scenario-img--1502674318',owner='d97265454999468fb261510e60c81b0e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T17:17:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.529 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.529 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.530 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.530 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.531 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.531 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.532 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.532 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.533 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.533 189463 DEBUG nova.virt.hardware [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.537 189463 DEBUG nova.virt.libvirt.vif [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z',id=15,image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bb3de81f-f629-45e4-a58b-8725288b0515'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d97265454999468fb261510e60c81b0e',ramdisk_id='',reservation_id='r-gc6gldzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-603644689',owner_user_name='tempest-PrometheusGabbiTest-603644
689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:21:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5673ab6de24147cb96ea139c0ad6cb0e',uuid=2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.538 189463 DEBUG nova.network.os_vif_util [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converting VIF {"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.539 189463 DEBUG nova.network.os_vif_util [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.540 189463 DEBUG nova.objects.instance [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.557 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] End _get_guest_xml xml=<domain type="kvm">
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <uuid>2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e</uuid>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <name>instance-0000000f</name>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <memory>131072</memory>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <vcpu>1</vcpu>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <metadata>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:name>te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z</nova:name>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:creationTime>2025-12-02 17:21:38</nova:creationTime>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:flavor name="m1.nano">
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:memory>128</nova:memory>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:disk>1</nova:disk>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:swap>0</nova:swap>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:vcpus>1</nova:vcpus>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      </nova:flavor>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:owner>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:user uuid="5673ab6de24147cb96ea139c0ad6cb0e">tempest-PrometheusGabbiTest-603644689-project-member</nova:user>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:project uuid="d97265454999468fb261510e60c81b0e">tempest-PrometheusGabbiTest-603644689</nova:project>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      </nova:owner>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:root type="image" uuid="53890fe7-10ca-4d2d-8959-827e6ad0a9a2"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <nova:ports>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        <nova:port uuid="b7169bf1-4de3-40ed-bda2-cdae863fd264">
Dec  2 17:21:38 compute-0 nova_compute[189459]:          <nova:ip type="fixed" address="10.100.3.205" ipVersion="4"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:        </nova:port>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      </nova:ports>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </nova:instance>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </metadata>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <sysinfo type="smbios">
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <system>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <entry name="manufacturer">RDO</entry>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <entry name="product">OpenStack Compute</entry>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <entry name="serial">2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e</entry>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <entry name="uuid">2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e</entry>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <entry name="family">Virtual Machine</entry>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </system>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </sysinfo>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <os>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <boot dev="hd"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <smbios mode="sysinfo"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </os>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <features>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <acpi/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <apic/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <vmcoreinfo/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </features>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <clock offset="utc">
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <timer name="hpet" present="no"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </clock>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <cpu mode="host-model" match="exact">
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </cpu>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  <devices>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <disk type="file" device="disk">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <target dev="vda" bus="virtio"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <disk type="file" device="cdrom">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <driver name="qemu" type="raw" cache="none"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <source file="/var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.config"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <target dev="sda" bus="sata"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </disk>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <interface type="ethernet">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <mac address="fa:16:3e:0f:2c:97"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <mtu size="1442"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <target dev="tapb7169bf1-4d"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </interface>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <serial type="pty">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <log file="/var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/console.log" append="off"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </serial>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <video>
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <model type="virtio"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </video>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <input type="tablet" bus="usb"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <rng model="virtio">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <backend model="random">/dev/urandom</backend>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </rng>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <controller type="usb" index="0"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    <memballoon model="virtio">
Dec  2 17:21:38 compute-0 nova_compute[189459]:      <stats period="10"/>
Dec  2 17:21:38 compute-0 nova_compute[189459]:    </memballoon>
Dec  2 17:21:38 compute-0 nova_compute[189459]:  </devices>
Dec  2 17:21:38 compute-0 nova_compute[189459]: </domain>
Dec  2 17:21:38 compute-0 nova_compute[189459]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.567 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Preparing to wait for external event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.568 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.568 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.569 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.570 189463 DEBUG nova.virt.libvirt.vif [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z',id=15,image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bb3de81f-f629-45e4-a58b-8725288b0515'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d97265454999468fb261510e60c81b0e',ramdisk_id='',reservation_id='r-gc6gldzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-603644689',owner_user_name='tempest-PrometheusGabbiT
est-603644689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T17:21:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5673ab6de24147cb96ea139c0ad6cb0e',uuid=2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.570 189463 DEBUG nova.network.os_vif_util [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converting VIF {"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.571 189463 DEBUG nova.network.os_vif_util [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.572 189463 DEBUG os_vif [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.573 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.573 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.574 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.577 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.578 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7169bf1-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.578 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7169bf1-4d, col_values=(('external_ids', {'iface-id': 'b7169bf1-4de3-40ed-bda2-cdae863fd264', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:2c:97', 'vm-uuid': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.580 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.582 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:21:38 compute-0 NetworkManager[56503]: <info>  [1764696098.5849] manager: (tapb7169bf1-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.591 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.592 189463 INFO os_vif [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d')#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.692 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.787 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.787 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.787 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] No VIF found with MAC fa:16:3e:0f:2c:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 17:21:38 compute-0 nova_compute[189459]: 2025-12-02 17:21:38.787 189463 INFO nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Using config drive#033[00m
Dec  2 17:21:39 compute-0 nova_compute[189459]: 2025-12-02 17:21:39.540 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:21:39 compute-0 nova_compute[189459]: 2025-12-02 17:21:39.542 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:21:39 compute-0 nova_compute[189459]: 2025-12-02 17:21:39.543 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:21:39 compute-0 nova_compute[189459]: 2025-12-02 17:21:39.544 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.528 189463 INFO nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Creating config drive at /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.config#033[00m
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.537 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu46fbust execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.665 189463 DEBUG oslo_concurrency.processutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu46fbust" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:40 compute-0 kernel: tapb7169bf1-4d: entered promiscuous mode
Dec  2 17:21:40 compute-0 NetworkManager[56503]: <info>  [1764696100.7784] manager: (tapb7169bf1-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.778 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:40 compute-0 ovn_controller[97975]: 2025-12-02T17:21:40Z|00168|binding|INFO|Claiming lport b7169bf1-4de3-40ed-bda2-cdae863fd264 for this chassis.
Dec  2 17:21:40 compute-0 ovn_controller[97975]: 2025-12-02T17:21:40Z|00169|binding|INFO|b7169bf1-4de3-40ed-bda2-cdae863fd264: Claiming fa:16:3e:0f:2c:97 10.100.3.205
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.786 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:2c:97 10.100.3.205'], port_security=['fa:16:3e:0f:2c:97 10.100.3.205'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.205/16', 'neutron:device_id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd97265454999468fb261510e60c81b0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf5ac8bc-8bfc-4f8e-a133-81a949c4ce5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de4d374-0f93-45af-a6f2-2a5ac9c09a1c, chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=b7169bf1-4de3-40ed-bda2-cdae863fd264) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.788 106835 INFO neutron.agent.ovn.metadata.agent [-] Port b7169bf1-4de3-40ed-bda2-cdae863fd264 in datapath 82b562d0-fe3d-43c8-b78e-fc2eee29ef70 bound to our chassis#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.790 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82b562d0-fe3d-43c8-b78e-fc2eee29ef70#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.814 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[7e984fdf-e309-4769-a111-4f75c2baf7a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:21:40 compute-0 ovn_controller[97975]: 2025-12-02T17:21:40Z|00170|binding|INFO|Setting lport b7169bf1-4de3-40ed-bda2-cdae863fd264 ovn-installed in OVS
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.817 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:40 compute-0 ovn_controller[97975]: 2025-12-02T17:21:40Z|00171|binding|INFO|Setting lport b7169bf1-4de3-40ed-bda2-cdae863fd264 up in Southbound
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.826 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:40 compute-0 systemd-machined[155878]: New machine qemu-16-instance-0000000f.
Dec  2 17:21:40 compute-0 systemd-udevd[255888]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 17:21:40 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.852 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[41019e94-b50b-4d35-8b27-8d9026e8cde6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.856 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[2126a930-10ef-4109-9962-3717e83ff396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:21:40 compute-0 NetworkManager[56503]: <info>  [1764696100.8623] device (tapb7169bf1-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 17:21:40 compute-0 NetworkManager[56503]: <info>  [1764696100.8661] device (tapb7169bf1-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 17:21:40 compute-0 podman[255854]: 2025-12-02 17:21:40.883540864 +0000 UTC m=+0.130859675 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.892 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[779f84a2-be5b-4165-8d4d-d8b63af27c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:21:40 compute-0 podman[255855]: 2025-12-02 17:21:40.911040177 +0000 UTC m=+0.146979445 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.912 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[e7401581-7a69-4b6a-8289-9fe3db76811a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82b562d0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539436, 'reachable_time': 41222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255906, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.931 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[b26e99ff-22d7-42cc-9a7e-97a7ccc9387f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap82b562d0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539452, 'tstamp': 539452}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255912, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap82b562d0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539456, 'tstamp': 539456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255912, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.934 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82b562d0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.936 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:40 compute-0 nova_compute[189459]: 2025-12-02 17:21:40.939 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.949 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82b562d0-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.950 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.950 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82b562d0-f0, col_values=(('external_ids', {'iface-id': '3390bd6d-860e-4bcb-929b-c08f611343b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:40.950 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.515 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764696101.5151038, 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.516 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] VM Started (Lifecycle Event)#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.545 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.552 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764696101.5152178, 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.553 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] VM Paused (Lifecycle Event)#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.584 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.590 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.625 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.770 189463 DEBUG nova.compute.manager [req-623a5c6f-a277-45a8-9fcc-bb3979f7e12d req-170a6391-1941-4e11-87f0-ea0f5568ad05 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.772 189463 DEBUG oslo_concurrency.lockutils [req-623a5c6f-a277-45a8-9fcc-bb3979f7e12d req-170a6391-1941-4e11-87f0-ea0f5568ad05 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.773 189463 DEBUG oslo_concurrency.lockutils [req-623a5c6f-a277-45a8-9fcc-bb3979f7e12d req-170a6391-1941-4e11-87f0-ea0f5568ad05 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.774 189463 DEBUG oslo_concurrency.lockutils [req-623a5c6f-a277-45a8-9fcc-bb3979f7e12d req-170a6391-1941-4e11-87f0-ea0f5568ad05 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.775 189463 DEBUG nova.compute.manager [req-623a5c6f-a277-45a8-9fcc-bb3979f7e12d req-170a6391-1941-4e11-87f0-ea0f5568ad05 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Processing event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.777 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.782 189463 DEBUG nova.virt.driver [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] Emitting event <LifecycleEvent: 1764696101.7817297, 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.782 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] VM Resumed (Lifecycle Event)#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.784 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.789 189463 INFO nova.virt.libvirt.driver [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Instance spawned successfully.#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.789 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.813 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.822 189463 DEBUG nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.828 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.828 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.829 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.830 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.830 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.831 189463 DEBUG nova.virt.libvirt.driver [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.855 189463 INFO nova.compute.manager [None req-bbf598a5-32ce-438b-ba75-244f7fcd621a - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.885 189463 INFO nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Took 8.53 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.885 189463 DEBUG nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:21:41 compute-0 nova_compute[189459]: 2025-12-02 17:21:41.981 189463 INFO nova.compute.manager [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Took 9.03 seconds to build instance.#033[00m
Dec  2 17:21:42 compute-0 nova_compute[189459]: 2025-12-02 17:21:42.003 189463 DEBUG oslo_concurrency.lockutils [None req-2c7a2811-c1ac-41c1-b7f6-b97dbb2e9a69 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  2 17:21:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.582 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.692 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.959 189463 DEBUG nova.compute.manager [req-bb48a49c-ad8d-4b78-bc91-a4d6431e6c85 req-df2891f9-2c32-47bc-a571-5293a17546b4 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.959 189463 DEBUG oslo_concurrency.lockutils [req-bb48a49c-ad8d-4b78-bc91-a4d6431e6c85 req-df2891f9-2c32-47bc-a571-5293a17546b4 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.959 189463 DEBUG oslo_concurrency.lockutils [req-bb48a49c-ad8d-4b78-bc91-a4d6431e6c85 req-df2891f9-2c32-47bc-a571-5293a17546b4 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.960 189463 DEBUG oslo_concurrency.lockutils [req-bb48a49c-ad8d-4b78-bc91-a4d6431e6c85 req-df2891f9-2c32-47bc-a571-5293a17546b4 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.960 189463 DEBUG nova.compute.manager [req-bb48a49c-ad8d-4b78-bc91-a4d6431e6c85 req-df2891f9-2c32-47bc-a571-5293a17546b4 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] No waiting events found dispatching network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:21:43 compute-0 nova_compute[189459]: 2025-12-02 17:21:43.960 189463 WARNING nova.compute.manager [req-bb48a49c-ad8d-4b78-bc91-a4d6431e6c85 req-df2891f9-2c32-47bc-a571-5293a17546b4 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received unexpected event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 for instance with vm_state active and task_state None.#033[00m
Dec  2 17:21:44 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:21:44.131 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:21:44 compute-0 podman[255940]: 2025-12-02 17:21:44.250392065 +0000 UTC m=+0.076096667 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  2 17:21:44 compute-0 podman[255942]: 2025-12-02 17:21:44.275022231 +0000 UTC m=+0.093377787 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:21:44 compute-0 podman[255941]: 2025-12-02 17:21:44.284213466 +0000 UTC m=+0.105390087 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=)
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.531 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.581 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.582 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.583 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.584 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.586 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.872 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.873 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.873 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.874 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:21:44 compute-0 nova_compute[189459]: 2025-12-02 17:21:44.983 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.047 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.048 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.146 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.157 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.236 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.237 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.307 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.772 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.774 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4975MB free_disk=72.09342956542969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.774 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.775 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.858 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.858 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.858 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.859 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.937 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:21:45 compute-0 nova_compute[189459]: 2025-12-02 17:21:45.957 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:21:46 compute-0 nova_compute[189459]: 2025-12-02 17:21:46.001 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:21:46 compute-0 nova_compute[189459]: 2025-12-02 17:21:46.002 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:21:47 compute-0 nova_compute[189459]: 2025-12-02 17:21:47.826 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:47 compute-0 nova_compute[189459]: 2025-12-02 17:21:47.827 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:47 compute-0 nova_compute[189459]: 2025-12-02 17:21:47.828 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:21:48 compute-0 nova_compute[189459]: 2025-12-02 17:21:48.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:21:48 compute-0 nova_compute[189459]: 2025-12-02 17:21:48.586 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:48 compute-0 nova_compute[189459]: 2025-12-02 17:21:48.694 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:53 compute-0 podman[256011]: 2025-12-02 17:21:53.262187399 +0000 UTC m=+0.075992855 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:21:53 compute-0 podman[256012]: 2025-12-02 17:21:53.277827155 +0000 UTC m=+0.091945019 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:21:53 compute-0 podman[256010]: 2025-12-02 17:21:53.328593037 +0000 UTC m=+0.138390306 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:21:53 compute-0 nova_compute[189459]: 2025-12-02 17:21:53.590 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:53 compute-0 nova_compute[189459]: 2025-12-02 17:21:53.697 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:58 compute-0 nova_compute[189459]: 2025-12-02 17:21:58.594 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:58 compute-0 nova_compute[189459]: 2025-12-02 17:21:58.701 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:21:59 compute-0 podman[203941]: time="2025-12-02T17:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:21:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:21:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: ERROR   17:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: ERROR   17:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: ERROR   17:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: ERROR   17:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: ERROR   17:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:22:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:22:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:22:01.892 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:22:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:22:01.893 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:22:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:22:01.894 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:22:03 compute-0 nova_compute[189459]: 2025-12-02 17:22:03.601 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:03 compute-0 nova_compute[189459]: 2025-12-02 17:22:03.704 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:06 compute-0 podman[256081]: 2025-12-02 17:22:06.278200004 +0000 UTC m=+0.104042661 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, 
io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 17:22:08 compute-0 nova_compute[189459]: 2025-12-02 17:22:08.604 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:08 compute-0 nova_compute[189459]: 2025-12-02 17:22:08.707 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:10 compute-0 ovn_controller[97975]: 2025-12-02T17:22:10Z|00172|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  2 17:22:11 compute-0 podman[256102]: 2025-12-02 17:22:11.245557944 +0000 UTC m=+0.070001265 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:22:11 compute-0 podman[256101]: 2025-12-02 17:22:11.264790517 +0000 UTC m=+0.090214694 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:22:13 compute-0 nova_compute[189459]: 2025-12-02 17:22:13.609 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:13 compute-0 nova_compute[189459]: 2025-12-02 17:22:13.709 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:14 compute-0 podman[256148]: 2025-12-02 17:22:14.788023082 +0000 UTC m=+0.094031425 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:22:14 compute-0 podman[256149]: 2025-12-02 17:22:14.80970462 +0000 UTC m=+0.101459893 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Dec  2 17:22:14 compute-0 podman[256150]: 2025-12-02 17:22:14.826729373 +0000 UTC m=+0.121049074 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  2 17:22:16 compute-0 ovn_controller[97975]: 2025-12-02T17:22:16Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:2c:97 10.100.3.205
Dec  2 17:22:16 compute-0 ovn_controller[97975]: 2025-12-02T17:22:16Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:2c:97 10.100.3.205
Dec  2 17:22:18 compute-0 nova_compute[189459]: 2025-12-02 17:22:18.613 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:18 compute-0 nova_compute[189459]: 2025-12-02 17:22:18.710 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:23 compute-0 nova_compute[189459]: 2025-12-02 17:22:23.617 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:23 compute-0 nova_compute[189459]: 2025-12-02 17:22:23.714 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:24 compute-0 podman[256208]: 2025-12-02 17:22:24.245191454 +0000 UTC m=+0.068492145 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:22:24 compute-0 podman[256206]: 2025-12-02 17:22:24.28257871 +0000 UTC m=+0.110682129 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:22:24 compute-0 podman[256207]: 2025-12-02 17:22:24.312689111 +0000 UTC m=+0.125232795 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:22:28 compute-0 nova_compute[189459]: 2025-12-02 17:22:28.623 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:28 compute-0 nova_compute[189459]: 2025-12-02 17:22:28.717 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:29 compute-0 podman[203941]: time="2025-12-02T17:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:22:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:22:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
Dec  2 17:22:31 compute-0 nova_compute[189459]: 2025-12-02 17:22:31.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: ERROR   17:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: ERROR   17:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: ERROR   17:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: ERROR   17:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: ERROR   17:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:22:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:22:33 compute-0 nova_compute[189459]: 2025-12-02 17:22:33.628 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:33 compute-0 nova_compute[189459]: 2025-12-02 17:22:33.721 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:37 compute-0 podman[256275]: 2025-12-02 17:22:37.257241515 +0000 UTC m=+0.085422436 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  2 17:22:37 compute-0 nova_compute[189459]: 2025-12-02 17:22:37.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:38 compute-0 nova_compute[189459]: 2025-12-02 17:22:38.632 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:38 compute-0 nova_compute[189459]: 2025-12-02 17:22:38.724 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:40 compute-0 nova_compute[189459]: 2025-12-02 17:22:40.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:40 compute-0 nova_compute[189459]: 2025-12-02 17:22:40.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:40 compute-0 nova_compute[189459]: 2025-12-02 17:22:40.431 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:22:40 compute-0 nova_compute[189459]: 2025-12-02 17:22:40.432 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:22:41 compute-0 nova_compute[189459]: 2025-12-02 17:22:41.512 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:22:41 compute-0 nova_compute[189459]: 2025-12-02 17:22:41.514 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:22:41 compute-0 nova_compute[189459]: 2025-12-02 17:22:41.514 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:22:41 compute-0 nova_compute[189459]: 2025-12-02 17:22:41.515 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:22:42 compute-0 podman[256296]: 2025-12-02 17:22:42.249325492 +0000 UTC m=+0.078495022 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  2 17:22:42 compute-0 podman[256297]: 2025-12-02 17:22:42.263198561 +0000 UTC m=+0.083991827 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  2 17:22:43 compute-0 nova_compute[189459]: 2025-12-02 17:22:43.637 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:43 compute-0 nova_compute[189459]: 2025-12-02 17:22:43.727 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:44 compute-0 nova_compute[189459]: 2025-12-02 17:22:44.754 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:22:44 compute-0 nova_compute[189459]: 2025-12-02 17:22:44.902 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:22:44 compute-0 nova_compute[189459]: 2025-12-02 17:22:44.903 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:22:44 compute-0 nova_compute[189459]: 2025-12-02 17:22:44.903 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:44 compute-0 nova_compute[189459]: 2025-12-02 17:22:44.904 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.242 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.243 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.243 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.243 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:22:45 compute-0 podman[256336]: 2025-12-02 17:22:45.26078816 +0000 UTC m=+0.081718777 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler)
Dec  2 17:22:45 compute-0 podman[256335]: 2025-12-02 17:22:45.279215181 +0000 UTC m=+0.107151454 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  2 17:22:45 compute-0 podman[256337]: 2025-12-02 17:22:45.280601598 +0000 UTC m=+0.088636141 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.716 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.787 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.788 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.847 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.854 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.913 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.913 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:22:45 compute-0 nova_compute[189459]: 2025-12-02 17:22:45.973 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.299 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.300 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=72.065185546875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.301 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.301 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.524 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.524 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.524 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.524 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.593 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.633 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.634 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:22:46 compute-0 nova_compute[189459]: 2025-12-02 17:22:46.634 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:22:47 compute-0 nova_compute[189459]: 2025-12-02 17:22:47.140 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:47 compute-0 nova_compute[189459]: 2025-12-02 17:22:47.141 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:47 compute-0 nova_compute[189459]: 2025-12-02 17:22:47.141 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:47 compute-0 nova_compute[189459]: 2025-12-02 17:22:47.141 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:22:48 compute-0 nova_compute[189459]: 2025-12-02 17:22:48.641 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:48 compute-0 nova_compute[189459]: 2025-12-02 17:22:48.729 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:50 compute-0 nova_compute[189459]: 2025-12-02 17:22:50.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:22:53 compute-0 nova_compute[189459]: 2025-12-02 17:22:53.645 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:53 compute-0 nova_compute[189459]: 2025-12-02 17:22:53.732 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:55 compute-0 podman[256405]: 2025-12-02 17:22:55.29466635 +0000 UTC m=+0.110493713 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:22:55 compute-0 podman[256406]: 2025-12-02 17:22:55.298002499 +0000 UTC m=+0.111382247 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:22:55 compute-0 podman[256404]: 2025-12-02 17:22:55.312536826 +0000 UTC m=+0.135272693 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:22:58 compute-0 nova_compute[189459]: 2025-12-02 17:22:58.650 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:58 compute-0 nova_compute[189459]: 2025-12-02 17:22:58.735 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:22:59 compute-0 podman[203941]: time="2025-12-02T17:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:22:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:22:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: ERROR   17:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: ERROR   17:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: ERROR   17:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: ERROR   17:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: ERROR   17:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:23:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:23:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:23:01.894 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:23:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:23:01.897 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:23:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:23:01.898 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.058 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.060 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.068 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.072 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03291e77aa784768971a651118fdf91e05c5b9452a253ec257ec01d0b890c7f4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:23:03 compute-0 nova_compute[189459]: 2025-12-02 17:23:03.655 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.729 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 02 Dec 2025 17:23:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-41611ca6-b65a-4496-9bc5-df74e52b3c45 x-openstack-request-id: req-41611ca6-b65a-4496-9bc5-df74e52b3c45 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.729 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e", "name": "te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z", "status": "ACTIVE", "tenant_id": "d97265454999468fb261510e60c81b0e", "user_id": "5673ab6de24147cb96ea139c0ad6cb0e", "metadata": {"metering.server_group": "bb3de81f-f629-45e4-a58b-8725288b0515"}, "hostId": "24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b", "image": {"id": "53890fe7-10ca-4d2d-8959-827e6ad0a9a2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53890fe7-10ca-4d2d-8959-827e6ad0a9a2"}]}, "flavor": {"id": "8e4a4b21-ee56-489d-aeb9-f21b8412f996", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8e4a4b21-ee56-489d-aeb9-f21b8412f996"}]}, "created": "2025-12-02T17:21:31Z", "updated": "2025-12-02T17:21:41Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.205", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0f:2c:97"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-02T17:21:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.730 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e used request id req-41611ca6-b65a-4496-9bc5-df74e52b3c45 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.731 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'name': 'te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.735 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.736 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:23:03.736089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 nova_compute[189459]: 2025-12-02 17:23:03.738 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.744 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e / tapb7169bf1-4d inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.744 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.750 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.751 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.752 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:23:03.751687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.752 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.752 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.753 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.753 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:23:03.753121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.774 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.775 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.795 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.796 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.797 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.797 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:23:03.797323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.827 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/cpu volume: 80280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.849 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 332580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.850 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:23:03.850417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.890 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.891 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.928 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 29633024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.928 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 458832854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.929 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 46386137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.930 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 681616974 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:23:03.929566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.930 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 72946936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.931 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.932 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.932 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.933 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:23:03.931526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:23:03.934552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.935 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.935 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.936 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.937 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.937 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.937 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.938 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.938 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.938 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.939 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:23:03.937482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.940 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 72835072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:23:03.940682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.941 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.941 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.942 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 4330094733 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:23:03.943226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.943 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.944 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3584130836 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.944 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:23:03.945336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.945 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.946 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 290 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.947 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:23:03.946761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.947 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.947 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.947 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:23:03.948495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.949 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-02T17:23:03.949804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z>]
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.950 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:23:03.950852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.951 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.952 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:23:03.951905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.952 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.952 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.953 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:23:03.953235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.954 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:23:03.954530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.955 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.956 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:23:03.955962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.956 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.956 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:23:03.957514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.958 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 1746 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.958 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.959 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:23:03.959109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.959 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.960 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/memory.usage volume: 43.50390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:23:03.960731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.961 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 43.359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z>]
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-02T17:23:03.962235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.963 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.963 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.963 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes volume: 1472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.963 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:23:03.963337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.964 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.965 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:23:03.964812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:23:03.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:23:08 compute-0 podman[256475]: 2025-12-02 17:23:08.258909306 +0000 UTC m=+0.088327122 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Dec  2 17:23:08 compute-0 nova_compute[189459]: 2025-12-02 17:23:08.660 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:08 compute-0 nova_compute[189459]: 2025-12-02 17:23:08.741 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:13 compute-0 podman[256495]: 2025-12-02 17:23:13.290462905 +0000 UTC m=+0.105066748 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:23:13 compute-0 podman[256494]: 2025-12-02 17:23:13.307048227 +0000 UTC m=+0.140225545 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4)
Dec  2 17:23:13 compute-0 nova_compute[189459]: 2025-12-02 17:23:13.664 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:13 compute-0 nova_compute[189459]: 2025-12-02 17:23:13.744 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:16 compute-0 podman[256531]: 2025-12-02 17:23:16.259310329 +0000 UTC m=+0.078341237 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  2 17:23:16 compute-0 podman[256533]: 2025-12-02 17:23:16.26313182 +0000 UTC m=+0.072199153 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:23:16 compute-0 podman[256532]: 2025-12-02 17:23:16.279883976 +0000 UTC m=+0.097029784 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:23:18 compute-0 nova_compute[189459]: 2025-12-02 17:23:18.667 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:18 compute-0 nova_compute[189459]: 2025-12-02 17:23:18.747 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:23 compute-0 nova_compute[189459]: 2025-12-02 17:23:23.671 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:23 compute-0 nova_compute[189459]: 2025-12-02 17:23:23.750 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:26 compute-0 podman[256599]: 2025-12-02 17:23:26.264699419 +0000 UTC m=+0.082662322 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:23:26 compute-0 podman[256600]: 2025-12-02 17:23:26.304340584 +0000 UTC m=+0.103474566 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:23:26 compute-0 podman[256598]: 2025-12-02 17:23:26.346514697 +0000 UTC m=+0.158491731 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec  2 17:23:28 compute-0 nova_compute[189459]: 2025-12-02 17:23:28.677 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:28 compute-0 nova_compute[189459]: 2025-12-02 17:23:28.755 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:29 compute-0 podman[203941]: time="2025-12-02T17:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:23:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:23:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4779 "" "Go-http-client/1.1"
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: ERROR   17:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: ERROR   17:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: ERROR   17:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: ERROR   17:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: ERROR   17:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:23:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:23:33 compute-0 nova_compute[189459]: 2025-12-02 17:23:33.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:33 compute-0 nova_compute[189459]: 2025-12-02 17:23:33.680 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:33 compute-0 nova_compute[189459]: 2025-12-02 17:23:33.757 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:38 compute-0 nova_compute[189459]: 2025-12-02 17:23:38.685 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:38 compute-0 nova_compute[189459]: 2025-12-02 17:23:38.763 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:38 compute-0 podman[256670]: 2025-12-02 17:23:38.81902523 +0000 UTC m=+0.107895924 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:23:39 compute-0 nova_compute[189459]: 2025-12-02 17:23:39.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:41 compute-0 nova_compute[189459]: 2025-12-02 17:23:41.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:41 compute-0 nova_compute[189459]: 2025-12-02 17:23:41.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:23:41 compute-0 nova_compute[189459]: 2025-12-02 17:23:41.656 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:23:41 compute-0 nova_compute[189459]: 2025-12-02 17:23:41.656 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:23:41 compute-0 nova_compute[189459]: 2025-12-02 17:23:41.657 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:23:43 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 17:23:43 compute-0 nova_compute[189459]: 2025-12-02 17:23:43.693 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:43 compute-0 nova_compute[189459]: 2025-12-02 17:23:43.766 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:44 compute-0 podman[256693]: 2025-12-02 17:23:44.247824737 +0000 UTC m=+0.079735044 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:23:44 compute-0 podman[256692]: 2025-12-02 17:23:44.285474129 +0000 UTC m=+0.120161060 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.696 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.773 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.773 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.774 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.775 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.840 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.840 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.841 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:23:44 compute-0 nova_compute[189459]: 2025-12-02 17:23:44.841 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.032 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.125 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.126 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.220 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.228 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.320 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.321 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.390 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.746 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.747 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4948MB free_disk=72.06526947021484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.748 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.749 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.851 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.852 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.853 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.853 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.920 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.936 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.939 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:23:45 compute-0 nova_compute[189459]: 2025-12-02 17:23:45.940 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:23:47 compute-0 podman[256746]: 2025-12-02 17:23:47.260203739 +0000 UTC m=+0.071612428 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:23:47 compute-0 podman[256740]: 2025-12-02 17:23:47.269759514 +0000 UTC m=+0.087839400 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, version=9.4, vcs-type=git, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64)
Dec  2 17:23:47 compute-0 podman[256739]: 2025-12-02 17:23:47.281450915 +0000 UTC m=+0.111383327 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:23:48 compute-0 nova_compute[189459]: 2025-12-02 17:23:48.577 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:48 compute-0 nova_compute[189459]: 2025-12-02 17:23:48.578 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:48 compute-0 nova_compute[189459]: 2025-12-02 17:23:48.578 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:48 compute-0 nova_compute[189459]: 2025-12-02 17:23:48.578 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:23:48 compute-0 nova_compute[189459]: 2025-12-02 17:23:48.697 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:48 compute-0 nova_compute[189459]: 2025-12-02 17:23:48.767 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:52 compute-0 nova_compute[189459]: 2025-12-02 17:23:52.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:23:53 compute-0 nova_compute[189459]: 2025-12-02 17:23:53.701 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:53 compute-0 nova_compute[189459]: 2025-12-02 17:23:53.770 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:57 compute-0 podman[256795]: 2025-12-02 17:23:57.287821481 +0000 UTC m=+0.092140385 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:23:57 compute-0 podman[256796]: 2025-12-02 17:23:57.316561376 +0000 UTC m=+0.112164848 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:23:57 compute-0 podman[256794]: 2025-12-02 17:23:57.328574806 +0000 UTC m=+0.143150753 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:23:58 compute-0 nova_compute[189459]: 2025-12-02 17:23:58.706 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:58 compute-0 nova_compute[189459]: 2025-12-02 17:23:58.774 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:23:59 compute-0 podman[203941]: time="2025-12-02T17:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:23:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:23:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: ERROR   17:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: ERROR   17:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: ERROR   17:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: ERROR   17:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: ERROR   17:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:24:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:24:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:24:01.896 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:24:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:24:01.899 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:24:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:24:01.900 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:24:03 compute-0 nova_compute[189459]: 2025-12-02 17:24:03.712 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:03 compute-0 nova_compute[189459]: 2025-12-02 17:24:03.775 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:08 compute-0 nova_compute[189459]: 2025-12-02 17:24:08.717 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:08 compute-0 nova_compute[189459]: 2025-12-02 17:24:08.778 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:09 compute-0 podman[256861]: 2025-12-02 17:24:09.264262476 +0000 UTC m=+0.084076110 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Dec  2 17:24:13 compute-0 nova_compute[189459]: 2025-12-02 17:24:13.723 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:13 compute-0 nova_compute[189459]: 2025-12-02 17:24:13.781 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:14 compute-0 podman[256883]: 2025-12-02 17:24:14.765574972 +0000 UTC m=+0.089537415 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:24:14 compute-0 podman[256882]: 2025-12-02 17:24:14.781216119 +0000 UTC m=+0.109710423 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec  2 17:24:18 compute-0 podman[256922]: 2025-12-02 17:24:18.260801952 +0000 UTC m=+0.086884435 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:24:18 compute-0 podman[256923]: 2025-12-02 17:24:18.275819762 +0000 UTC m=+0.093742717 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, vcs-type=git, release=1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:24:18 compute-0 podman[256928]: 2025-12-02 17:24:18.300470658 +0000 UTC m=+0.109787894 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  2 17:24:18 compute-0 nova_compute[189459]: 2025-12-02 17:24:18.726 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:18 compute-0 nova_compute[189459]: 2025-12-02 17:24:18.785 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:23 compute-0 nova_compute[189459]: 2025-12-02 17:24:23.729 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:23 compute-0 nova_compute[189459]: 2025-12-02 17:24:23.785 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:28 compute-0 podman[256977]: 2025-12-02 17:24:28.263249844 +0000 UTC m=+0.082292562 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:24:28 compute-0 podman[256978]: 2025-12-02 17:24:28.267687572 +0000 UTC m=+0.087404978 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:24:28 compute-0 podman[256976]: 2025-12-02 17:24:28.328138112 +0000 UTC m=+0.153190590 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 17:24:28 compute-0 nova_compute[189459]: 2025-12-02 17:24:28.732 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:28 compute-0 nova_compute[189459]: 2025-12-02 17:24:28.790 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:29 compute-0 podman[203941]: time="2025-12-02T17:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:24:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:24:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: ERROR   17:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: ERROR   17:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: ERROR   17:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: ERROR   17:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: ERROR   17:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:24:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:24:33 compute-0 nova_compute[189459]: 2025-12-02 17:24:33.736 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:33 compute-0 nova_compute[189459]: 2025-12-02 17:24:33.793 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:34 compute-0 nova_compute[189459]: 2025-12-02 17:24:34.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:38 compute-0 nova_compute[189459]: 2025-12-02 17:24:38.741 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:38 compute-0 nova_compute[189459]: 2025-12-02 17:24:38.794 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:40 compute-0 podman[257043]: 2025-12-02 17:24:40.269146063 +0000 UTC m=+0.096911161 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, config_id=edpm)
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.692 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.693 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.693 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:24:41 compute-0 nova_compute[189459]: 2025-12-02 17:24:41.694 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.705 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.721 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.722 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.722 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.723 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.746 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:43 compute-0 nova_compute[189459]: 2025-12-02 17:24:43.798 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:45 compute-0 podman[257063]: 2025-12-02 17:24:45.288780595 +0000 UTC m=+0.105177592 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:24:45 compute-0 podman[257064]: 2025-12-02 17:24:45.296897461 +0000 UTC m=+0.108954753 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.436 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.462 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.463 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.464 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.465 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.537 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.599 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.600 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.680 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.688 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.758 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.760 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:24:46 compute-0 nova_compute[189459]: 2025-12-02 17:24:46.824 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.378 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.379 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4946MB free_disk=72.06537628173828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.380 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.381 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.473 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.474 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.475 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.475 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.545 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.561 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.565 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:24:47 compute-0 nova_compute[189459]: 2025-12-02 17:24:47.566 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:24:48 compute-0 nova_compute[189459]: 2025-12-02 17:24:48.751 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:48 compute-0 nova_compute[189459]: 2025-12-02 17:24:48.801 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:49 compute-0 podman[257114]: 2025-12-02 17:24:49.258870269 +0000 UTC m=+0.090411539 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:24:49 compute-0 podman[257116]: 2025-12-02 17:24:49.271583137 +0000 UTC m=+0.084285865 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:24:49 compute-0 podman[257115]: 2025-12-02 17:24:49.287707527 +0000 UTC m=+0.105029938 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:24:49 compute-0 nova_compute[189459]: 2025-12-02 17:24:49.541 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:50 compute-0 nova_compute[189459]: 2025-12-02 17:24:50.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:50 compute-0 nova_compute[189459]: 2025-12-02 17:24:50.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:50 compute-0 nova_compute[189459]: 2025-12-02 17:24:50.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:24:53 compute-0 nova_compute[189459]: 2025-12-02 17:24:53.756 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:53 compute-0 nova_compute[189459]: 2025-12-02 17:24:53.806 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:54 compute-0 nova_compute[189459]: 2025-12-02 17:24:54.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:24:58 compute-0 nova_compute[189459]: 2025-12-02 17:24:58.760 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:58 compute-0 nova_compute[189459]: 2025-12-02 17:24:58.807 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:24:59 compute-0 podman[257170]: 2025-12-02 17:24:59.248513109 +0000 UTC m=+0.072883091 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:24:59 compute-0 podman[257171]: 2025-12-02 17:24:59.292174292 +0000 UTC m=+0.107059302 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:24:59 compute-0 podman[257169]: 2025-12-02 17:24:59.31239832 +0000 UTC m=+0.130858765 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:24:59 compute-0 podman[203941]: time="2025-12-02T17:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:24:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:24:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: ERROR   17:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: ERROR   17:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: ERROR   17:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: ERROR   17:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: ERROR   17:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:25:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:25:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:25:01.898 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:25:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:25:01.898 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:25:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:25:01.899 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.058 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.059 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'name': 'te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.069 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.069 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.070 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:25:03.070042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.074 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.078 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:25:03.079464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.080 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:25:03.080888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.097 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.097 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.111 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.112 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.112 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.113 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:25:03.113014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.136 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/cpu volume: 199480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.159 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 335080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:25:03.161051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.208 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.208 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.259 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 29641216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.259 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.260 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.261 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.261 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 458832854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.261 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 46386137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.261 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 683015431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.262 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 72946936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:25:03.261002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.263 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.263 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.263 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.264 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.264 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:25:03.263332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.265 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.266 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.266 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:25:03.265645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.266 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:25:03.267513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.267 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.269 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.269 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.269 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:25:03.269004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.269 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.270 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.271 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 4346397115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:25:03.270932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.271 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.271 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3622428283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.271 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.272 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:25:03.272428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.273 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 298 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.274 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.274 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 329 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.274 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:25:03.273791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.275 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.276 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.276 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes.delta volume: 504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.276 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:25:03.276040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.277 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.278 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:25:03.278244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:25:03.279246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.279 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.280 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.280 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.281 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.282 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.282 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:25:03.280619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:25:03.281975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.282 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:25:03.283479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.284 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.285 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:25:03.284864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.285 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:25:03.286466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.286 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.287 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.288 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/memory.usage volume: 43.50390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.288 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 42.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:25:03.287910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.289 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.290 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.290 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:25:03.289755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.291 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.291 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:25:03.291333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.291 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:25:03.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:25:03 compute-0 nova_compute[189459]: 2025-12-02 17:25:03.766 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:03 compute-0 nova_compute[189459]: 2025-12-02 17:25:03.808 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:08 compute-0 nova_compute[189459]: 2025-12-02 17:25:08.770 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:08 compute-0 nova_compute[189459]: 2025-12-02 17:25:08.811 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:11 compute-0 podman[257242]: 2025-12-02 17:25:11.279340582 +0000 UTC m=+0.111414768 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6)
Dec  2 17:25:13 compute-0 nova_compute[189459]: 2025-12-02 17:25:13.776 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:13 compute-0 nova_compute[189459]: 2025-12-02 17:25:13.815 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:16 compute-0 podman[257263]: 2025-12-02 17:25:16.26243898 +0000 UTC m=+0.084757017 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  2 17:25:16 compute-0 podman[257264]: 2025-12-02 17:25:16.272766836 +0000 UTC m=+0.084441620 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:25:18 compute-0 nova_compute[189459]: 2025-12-02 17:25:18.781 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:18 compute-0 nova_compute[189459]: 2025-12-02 17:25:18.815 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:20 compute-0 podman[257304]: 2025-12-02 17:25:20.267168047 +0000 UTC m=+0.079380195 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 17:25:20 compute-0 podman[257303]: 2025-12-02 17:25:20.284863718 +0000 UTC m=+0.094217769 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:25:20 compute-0 podman[257302]: 2025-12-02 17:25:20.285921417 +0000 UTC m=+0.111271474 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:25:23 compute-0 nova_compute[189459]: 2025-12-02 17:25:23.785 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:23 compute-0 nova_compute[189459]: 2025-12-02 17:25:23.818 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:28 compute-0 nova_compute[189459]: 2025-12-02 17:25:28.789 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:28 compute-0 nova_compute[189459]: 2025-12-02 17:25:28.821 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:29 compute-0 podman[203941]: time="2025-12-02T17:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:25:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:25:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec  2 17:25:30 compute-0 podman[257356]: 2025-12-02 17:25:30.26404445 +0000 UTC m=+0.082530848 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:25:30 compute-0 podman[257357]: 2025-12-02 17:25:30.293669769 +0000 UTC m=+0.093232463 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:25:30 compute-0 podman[257355]: 2025-12-02 17:25:30.341414011 +0000 UTC m=+0.156116559 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: ERROR   17:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: ERROR   17:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: ERROR   17:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: ERROR   17:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: ERROR   17:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:25:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:25:33 compute-0 nova_compute[189459]: 2025-12-02 17:25:33.794 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:33 compute-0 nova_compute[189459]: 2025-12-02 17:25:33.824 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:34 compute-0 nova_compute[189459]: 2025-12-02 17:25:34.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:38 compute-0 nova_compute[189459]: 2025-12-02 17:25:38.799 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:38 compute-0 nova_compute[189459]: 2025-12-02 17:25:38.827 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:42 compute-0 podman[257429]: 2025-12-02 17:25:42.269045206 +0000 UTC m=+0.095229897 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Dec  2 17:25:42 compute-0 nova_compute[189459]: 2025-12-02 17:25:42.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:42 compute-0 nova_compute[189459]: 2025-12-02 17:25:42.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:25:43 compute-0 nova_compute[189459]: 2025-12-02 17:25:43.712 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:25:43 compute-0 nova_compute[189459]: 2025-12-02 17:25:43.713 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:25:43 compute-0 nova_compute[189459]: 2025-12-02 17:25:43.713 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:25:43 compute-0 nova_compute[189459]: 2025-12-02 17:25:43.804 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:43 compute-0 nova_compute[189459]: 2025-12-02 17:25:43.829 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.869 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.891 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.893 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.895 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.896 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.897 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.926 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.928 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.929 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:25:46 compute-0 nova_compute[189459]: 2025-12-02 17:25:46.930 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.203 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:25:47 compute-0 podman[257447]: 2025-12-02 17:25:47.238392888 +0000 UTC m=+0.067400196 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, 
org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:25:47 compute-0 podman[257448]: 2025-12-02 17:25:47.253023738 +0000 UTC m=+0.072837081 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.278 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.279 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.347 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.355 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.424 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.425 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.486 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.894 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.896 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4949MB free_disk=72.06537628173828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.897 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:25:47 compute-0 nova_compute[189459]: 2025-12-02 17:25:47.898 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.027 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.027 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.028 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.029 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.112 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.196 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.203 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.226 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.246 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.306 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.326 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.328 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.328 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.329 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.808 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:48 compute-0 nova_compute[189459]: 2025-12-02 17:25:48.830 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:49 compute-0 nova_compute[189459]: 2025-12-02 17:25:49.852 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:50 compute-0 nova_compute[189459]: 2025-12-02 17:25:50.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:50 compute-0 nova_compute[189459]: 2025-12-02 17:25:50.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:25:51 compute-0 podman[257504]: 2025-12-02 17:25:51.261963195 +0000 UTC m=+0.073038206 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:25:51 compute-0 podman[257500]: 2025-12-02 17:25:51.267058361 +0000 UTC m=+0.082727444 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Dec  2 17:25:51 compute-0 podman[257499]: 2025-12-02 17:25:51.276725138 +0000 UTC m=+0.100823485 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec  2 17:25:52 compute-0 nova_compute[189459]: 2025-12-02 17:25:52.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:53 compute-0 nova_compute[189459]: 2025-12-02 17:25:53.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:53 compute-0 nova_compute[189459]: 2025-12-02 17:25:53.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:25:53 compute-0 nova_compute[189459]: 2025-12-02 17:25:53.434 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:25:53 compute-0 nova_compute[189459]: 2025-12-02 17:25:53.812 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:53 compute-0 nova_compute[189459]: 2025-12-02 17:25:53.833 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:56 compute-0 nova_compute[189459]: 2025-12-02 17:25:56.434 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:25:58 compute-0 nova_compute[189459]: 2025-12-02 17:25:58.818 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:58 compute-0 nova_compute[189459]: 2025-12-02 17:25:58.835 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:25:59 compute-0 podman[203941]: time="2025-12-02T17:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:25:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:25:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  2 17:26:01 compute-0 podman[257555]: 2025-12-02 17:26:01.269841873 +0000 UTC m=+0.087749507 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:26:01 compute-0 podman[257554]: 2025-12-02 17:26:01.269823283 +0000 UTC m=+0.089187656 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:26:01 compute-0 podman[257553]: 2025-12-02 17:26:01.340374751 +0000 UTC m=+0.154646339 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: ERROR   17:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: ERROR   17:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: ERROR   17:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: ERROR   17:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: ERROR   17:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:26:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:26:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:26:01.898 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:26:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:26:01.899 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:26:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:26:01.899 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:26:03 compute-0 nova_compute[189459]: 2025-12-02 17:26:03.821 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:03 compute-0 nova_compute[189459]: 2025-12-02 17:26:03.836 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:06 compute-0 nova_compute[189459]: 2025-12-02 17:26:06.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:06 compute-0 nova_compute[189459]: 2025-12-02 17:26:06.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:26:08 compute-0 nova_compute[189459]: 2025-12-02 17:26:08.824 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:08 compute-0 nova_compute[189459]: 2025-12-02 17:26:08.839 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:13 compute-0 podman[257623]: 2025-12-02 17:26:13.251614789 +0000 UTC m=+0.072826750 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6)
Dec  2 17:26:13 compute-0 nova_compute[189459]: 2025-12-02 17:26:13.828 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:13 compute-0 nova_compute[189459]: 2025-12-02 17:26:13.841 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:18 compute-0 podman[257645]: 2025-12-02 17:26:18.251046822 +0000 UTC m=+0.079252581 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:26:18 compute-0 podman[257644]: 2025-12-02 17:26:18.283783094 +0000 UTC m=+0.113991506 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  2 17:26:18 compute-0 nova_compute[189459]: 2025-12-02 17:26:18.833 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:18 compute-0 nova_compute[189459]: 2025-12-02 17:26:18.844 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:22 compute-0 podman[257684]: 2025-12-02 17:26:22.267156152 +0000 UTC m=+0.087683146 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:26:22 compute-0 podman[257686]: 2025-12-02 17:26:22.280027155 +0000 UTC m=+0.096559002 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:26:22 compute-0 podman[257685]: 2025-12-02 17:26:22.317196075 +0000 UTC m=+0.126869860 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, config_id=edpm, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Dec  2 17:26:23 compute-0 nova_compute[189459]: 2025-12-02 17:26:23.836 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:23 compute-0 nova_compute[189459]: 2025-12-02 17:26:23.846 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:28 compute-0 nova_compute[189459]: 2025-12-02 17:26:28.841 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:28 compute-0 nova_compute[189459]: 2025-12-02 17:26:28.849 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:29 compute-0 podman[203941]: time="2025-12-02T17:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:26:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:26:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: ERROR   17:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: ERROR   17:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: ERROR   17:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: ERROR   17:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: ERROR   17:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:26:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:26:32 compute-0 podman[257743]: 2025-12-02 17:26:32.270858968 +0000 UTC m=+0.074667520 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:26:32 compute-0 podman[257742]: 2025-12-02 17:26:32.275721087 +0000 UTC m=+0.086974087 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:26:32 compute-0 podman[257741]: 2025-12-02 17:26:32.355012588 +0000 UTC m=+0.172012631 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:26:33 compute-0 nova_compute[189459]: 2025-12-02 17:26:33.846 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:33 compute-0 nova_compute[189459]: 2025-12-02 17:26:33.853 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:36 compute-0 nova_compute[189459]: 2025-12-02 17:26:36.434 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:38 compute-0 nova_compute[189459]: 2025-12-02 17:26:38.850 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:38 compute-0 nova_compute[189459]: 2025-12-02 17:26:38.855 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:43 compute-0 nova_compute[189459]: 2025-12-02 17:26:43.855 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:43 compute-0 nova_compute[189459]: 2025-12-02 17:26:43.857 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:44 compute-0 podman[257810]: 2025-12-02 17:26:44.236492555 +0000 UTC m=+0.062607389 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.726 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.726 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.726 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:26:44 compute-0 nova_compute[189459]: 2025-12-02 17:26:44.727 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:26:48 compute-0 nova_compute[189459]: 2025-12-02 17:26:48.859 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:26:48 compute-0 nova_compute[189459]: 2025-12-02 17:26:48.860 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:49 compute-0 podman[257830]: 2025-12-02 17:26:49.243800927 +0000 UTC m=+0.073919839 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  2 17:26:49 compute-0 podman[257831]: 2025-12-02 17:26:49.245132252 +0000 UTC m=+0.076041665 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.782 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.798 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.799 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.799 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.800 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.800 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.801 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.824 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.824 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.825 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.825 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.898 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.965 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:26:49 compute-0 nova_compute[189459]: 2025-12-02 17:26:49.967 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.033 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.040 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.104 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.106 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.162 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.485 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.487 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.06535720825195GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.487 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.488 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.575 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.575 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.576 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.576 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.642 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.656 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.658 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:26:50 compute-0 nova_compute[189459]: 2025-12-02 17:26:50.658 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:26:52 compute-0 nova_compute[189459]: 2025-12-02 17:26:52.268 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:52 compute-0 nova_compute[189459]: 2025-12-02 17:26:52.369 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:52 compute-0 nova_compute[189459]: 2025-12-02 17:26:52.370 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:26:53 compute-0 podman[257880]: 2025-12-02 17:26:53.258072668 +0000 UTC m=+0.082087097 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 17:26:53 compute-0 podman[257882]: 2025-12-02 17:26:53.262567727 +0000 UTC m=+0.078114451 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  2 17:26:53 compute-0 podman[257881]: 2025-12-02 17:26:53.269576964 +0000 UTC m=+0.095150995 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, version=9.4, release=1214.1726694543, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  2 17:26:53 compute-0 nova_compute[189459]: 2025-12-02 17:26:53.862 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:26:53 compute-0 nova_compute[189459]: 2025-12-02 17:26:53.863 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:26:54 compute-0 nova_compute[189459]: 2025-12-02 17:26:54.507 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:57 compute-0 nova_compute[189459]: 2025-12-02 17:26:57.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:26:58 compute-0 nova_compute[189459]: 2025-12-02 17:26:58.864 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:26:59 compute-0 podman[203941]: time="2025-12-02T17:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:26:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:26:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: ERROR   17:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: ERROR   17:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: ERROR   17:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: ERROR   17:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: ERROR   17:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:27:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:27:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:27:01.900 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:27:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:27:01.900 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:27:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:27:01.901 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.059 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.059 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'name': 'te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.075 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.076 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.077 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:27:03.076944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.082 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.087 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.088 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.089 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.089 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:27:03.089249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.090 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.091 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:27:03.092036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.112 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.112 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.131 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.132 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.134 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:27:03.134610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.186 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/cpu volume: 319410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.213 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 336450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.215 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.215 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:27:03.215841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.264 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.265 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 podman[257939]: 2025-12-02 17:27:03.280787389 +0000 UTC m=+0.082113268 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:27:03 compute-0 podman[257938]: 2025-12-02 17:27:03.287311243 +0000 UTC m=+0.102364597 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.308 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 29641216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.308 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 458832854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 46386137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.309 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 683015431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 72946936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.311 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.311 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.311 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.311 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.312 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.313 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.313 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.313 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.314 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.316 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.316 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.316 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.316 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:27:03.309408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 4346397115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:27:03.310898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:27:03.312872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:27:03.314493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3622428283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:27:03.316044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.319 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.319 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.320 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 298 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.321 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.321 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 329 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.321 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:27:03.317732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:27:03.319645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:27:03.320849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:27:03.322674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:27:03.324070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.325 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.325 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:27:03.325018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:27:03.326236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:27:03.327213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:27:03.328388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:27:03.329660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.330 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.331 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.331 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 podman[257937]: 2025-12-02 17:27:03.331979212 +0000 UTC m=+0.147044307 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/memory.usage volume: 43.50390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 42.44140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:27:03.330812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:27:03.331979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:27:03.333229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.334 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:27:03.334478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:27:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:27:03 compute-0 nova_compute[189459]: 2025-12-02 17:27:03.866 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:27:03 compute-0 nova_compute[189459]: 2025-12-02 17:27:03.868 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:03 compute-0 nova_compute[189459]: 2025-12-02 17:27:03.869 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  2 17:27:03 compute-0 nova_compute[189459]: 2025-12-02 17:27:03.869 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 17:27:03 compute-0 nova_compute[189459]: 2025-12-02 17:27:03.870 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 17:27:03 compute-0 nova_compute[189459]: 2025-12-02 17:27:03.871 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:08 compute-0 nova_compute[189459]: 2025-12-02 17:27:08.870 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:13 compute-0 nova_compute[189459]: 2025-12-02 17:27:13.873 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:27:14 compute-0 podman[258004]: 2025-12-02 17:27:14.804733636 +0000 UTC m=+0.105081399 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 17:27:18 compute-0 nova_compute[189459]: 2025-12-02 17:27:18.874 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:20 compute-0 podman[258028]: 2025-12-02 17:27:20.246347169 +0000 UTC m=+0.076540229 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  2 17:27:20 compute-0 podman[258029]: 2025-12-02 17:27:20.284889505 +0000 UTC m=+0.099434659 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:27:23 compute-0 nova_compute[189459]: 2025-12-02 17:27:23.877 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:27:24 compute-0 podman[258067]: 2025-12-02 17:27:24.291696397 +0000 UTC m=+0.112766584 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 17:27:24 compute-0 podman[258072]: 2025-12-02 17:27:24.308270128 +0000 UTC m=+0.105538251 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:27:24 compute-0 podman[258068]: 2025-12-02 17:27:24.335334239 +0000 UTC m=+0.136313071 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm)
Dec  2 17:27:28 compute-0 nova_compute[189459]: 2025-12-02 17:27:28.881 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:29 compute-0 podman[203941]: time="2025-12-02T17:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:27:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:27:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: ERROR   17:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: ERROR   17:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: ERROR   17:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: ERROR   17:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: ERROR   17:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:27:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:27:33 compute-0 nova_compute[189459]: 2025-12-02 17:27:33.883 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:27:34 compute-0 podman[258125]: 2025-12-02 17:27:34.289666294 +0000 UTC m=+0.092834473 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:27:34 compute-0 podman[258124]: 2025-12-02 17:27:34.301677153 +0000 UTC m=+0.115794034 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:27:34 compute-0 podman[258123]: 2025-12-02 17:27:34.373968488 +0000 UTC m=+0.184529414 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 17:27:38 compute-0 nova_compute[189459]: 2025-12-02 17:27:38.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:38 compute-0 nova_compute[189459]: 2025-12-02 17:27:38.886 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:38 compute-0 nova_compute[189459]: 2025-12-02 17:27:38.888 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:43 compute-0 nova_compute[189459]: 2025-12-02 17:27:43.888 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:44 compute-0 nova_compute[189459]: 2025-12-02 17:27:44.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:44 compute-0 nova_compute[189459]: 2025-12-02 17:27:44.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:27:44 compute-0 nova_compute[189459]: 2025-12-02 17:27:44.780 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:27:44 compute-0 nova_compute[189459]: 2025-12-02 17:27:44.781 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:27:44 compute-0 nova_compute[189459]: 2025-12-02 17:27:44.781 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:27:45 compute-0 podman[258197]: 2025-12-02 17:27:45.259532426 +0000 UTC m=+0.088382075 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec  2 17:27:45 compute-0 nova_compute[189459]: 2025-12-02 17:27:45.938 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:27:45 compute-0 nova_compute[189459]: 2025-12-02 17:27:45.954 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:27:45 compute-0 nova_compute[189459]: 2025-12-02 17:27:45.955 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:27:45 compute-0 nova_compute[189459]: 2025-12-02 17:27:45.955 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.445 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.445 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.528 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.634 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.636 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.699 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.709 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.774 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.776 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:27:46 compute-0 nova_compute[189459]: 2025-12-02 17:27:46.864 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.326 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.327 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4954MB free_disk=72.06535720825195GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.328 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.328 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.700 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.701 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.702 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.702 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.791 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.807 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.809 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:27:47 compute-0 nova_compute[189459]: 2025-12-02 17:27:47.810 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:27:48 compute-0 nova_compute[189459]: 2025-12-02 17:27:48.891 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:49 compute-0 nova_compute[189459]: 2025-12-02 17:27:49.810 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:49 compute-0 nova_compute[189459]: 2025-12-02 17:27:49.811 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:51 compute-0 podman[258229]: 2025-12-02 17:27:51.302442969 +0000 UTC m=+0.120131510 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:27:51 compute-0 podman[258230]: 2025-12-02 17:27:51.344948841 +0000 UTC m=+0.147461678 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 17:27:51 compute-0 nova_compute[189459]: 2025-12-02 17:27:51.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:51 compute-0 nova_compute[189459]: 2025-12-02 17:27:51.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:27:53 compute-0 nova_compute[189459]: 2025-12-02 17:27:53.895 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:27:54 compute-0 nova_compute[189459]: 2025-12-02 17:27:54.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:55 compute-0 podman[258270]: 2025-12-02 17:27:55.238424429 +0000 UTC m=+0.057600145 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:27:55 compute-0 podman[258268]: 2025-12-02 17:27:55.245424265 +0000 UTC m=+0.077682540 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:27:55 compute-0 podman[258269]: 2025-12-02 17:27:55.248755094 +0000 UTC m=+0.077364651 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Dec  2 17:27:58 compute-0 nova_compute[189459]: 2025-12-02 17:27:58.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:27:58 compute-0 nova_compute[189459]: 2025-12-02 17:27:58.897 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:27:59 compute-0 podman[203941]: time="2025-12-02T17:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:27:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:27:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: ERROR   17:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: ERROR   17:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: ERROR   17:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: ERROR   17:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: ERROR   17:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:28:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:28:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:28:01.901 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:28:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:28:01.903 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:28:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:28:01.904 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:28:03 compute-0 nova_compute[189459]: 2025-12-02 17:28:03.900 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:28:05 compute-0 podman[258323]: 2025-12-02 17:28:05.243778053 +0000 UTC m=+0.068546676 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:28:05 compute-0 podman[258324]: 2025-12-02 17:28:05.262540202 +0000 UTC m=+0.080094673 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:28:05 compute-0 podman[258322]: 2025-12-02 17:28:05.296521067 +0000 UTC m=+0.123843738 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:28:08 compute-0 nova_compute[189459]: 2025-12-02 17:28:08.904 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:28:13 compute-0 nova_compute[189459]: 2025-12-02 17:28:13.908 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:16 compute-0 podman[258393]: 2025-12-02 17:28:16.272966786 +0000 UTC m=+0.089192116 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 17:28:18 compute-0 nova_compute[189459]: 2025-12-02 17:28:18.912 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:18 compute-0 nova_compute[189459]: 2025-12-02 17:28:18.914 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:22 compute-0 podman[258415]: 2025-12-02 17:28:22.285178028 +0000 UTC m=+0.106085296 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  2 17:28:22 compute-0 podman[258416]: 2025-12-02 17:28:22.290715135 +0000 UTC m=+0.104401181 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 17:28:23 compute-0 nova_compute[189459]: 2025-12-02 17:28:23.915 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:28:23 compute-0 nova_compute[189459]: 2025-12-02 17:28:23.919 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:26 compute-0 podman[258457]: 2025-12-02 17:28:26.285656102 +0000 UTC m=+0.094196319 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  2 17:28:26 compute-0 podman[258458]: 2025-12-02 17:28:26.2930653 +0000 UTC m=+0.094896828 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, distribution-scope=public, config_id=edpm, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec  2 17:28:26 compute-0 podman[258459]: 2025-12-02 17:28:26.308724107 +0000 UTC m=+0.098753631 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  2 17:28:28 compute-0 nova_compute[189459]: 2025-12-02 17:28:28.921 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:29 compute-0 podman[203941]: time="2025-12-02T17:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:28:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:28:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: ERROR   17:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: ERROR   17:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: ERROR   17:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: ERROR   17:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: ERROR   17:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:28:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:28:33 compute-0 nova_compute[189459]: 2025-12-02 17:28:33.924 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:28:33 compute-0 nova_compute[189459]: 2025-12-02 17:28:33.926 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:33 compute-0 nova_compute[189459]: 2025-12-02 17:28:33.927 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  2 17:28:33 compute-0 nova_compute[189459]: 2025-12-02 17:28:33.928 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 17:28:33 compute-0 nova_compute[189459]: 2025-12-02 17:28:33.929 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 17:28:33 compute-0 nova_compute[189459]: 2025-12-02 17:28:33.932 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:36 compute-0 podman[258514]: 2025-12-02 17:28:36.256915558 +0000 UTC m=+0.075579173 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:28:36 compute-0 podman[258513]: 2025-12-02 17:28:36.261424728 +0000 UTC m=+0.082380084 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:28:36 compute-0 podman[258512]: 2025-12-02 17:28:36.32381324 +0000 UTC m=+0.136476595 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:28:38 compute-0 nova_compute[189459]: 2025-12-02 17:28:38.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:38 compute-0 nova_compute[189459]: 2025-12-02 17:28:38.930 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:38 compute-0 nova_compute[189459]: 2025-12-02 17:28:38.934 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:43 compute-0 nova_compute[189459]: 2025-12-02 17:28:43.933 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:43 compute-0 nova_compute[189459]: 2025-12-02 17:28:43.935 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.805 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.806 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.807 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:28:44 compute-0 nova_compute[189459]: 2025-12-02 17:28:44.808 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:28:45 compute-0 nova_compute[189459]: 2025-12-02 17:28:45.988 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:28:46 compute-0 nova_compute[189459]: 2025-12-02 17:28:46.006 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:28:46 compute-0 nova_compute[189459]: 2025-12-02 17:28:46.006 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:28:46 compute-0 nova_compute[189459]: 2025-12-02 17:28:46.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:47 compute-0 podman[258584]: 2025-12-02 17:28:47.259152367 +0000 UTC m=+0.083883405 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.434 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.435 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.436 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.526 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.609 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.610 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.668 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.675 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.740 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.742 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.843 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.935 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:48 compute-0 nova_compute[189459]: 2025-12-02 17:28:48.937 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.240 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.241 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4960MB free_disk=72.06540298461914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.242 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.242 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.329 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.329 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.330 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.330 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.377 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.393 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.395 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:28:49 compute-0 nova_compute[189459]: 2025-12-02 17:28:49.395 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:28:51 compute-0 nova_compute[189459]: 2025-12-02 17:28:51.396 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:51 compute-0 nova_compute[189459]: 2025-12-02 17:28:51.398 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:52 compute-0 nova_compute[189459]: 2025-12-02 17:28:52.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:53 compute-0 podman[258622]: 2025-12-02 17:28:53.25723114 +0000 UTC m=+0.084517411 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:28:53 compute-0 podman[258621]: 2025-12-02 17:28:53.282116443 +0000 UTC m=+0.104687709 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:28:53 compute-0 nova_compute[189459]: 2025-12-02 17:28:53.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:53 compute-0 nova_compute[189459]: 2025-12-02 17:28:53.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:28:53 compute-0 nova_compute[189459]: 2025-12-02 17:28:53.938 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:53 compute-0 nova_compute[189459]: 2025-12-02 17:28:53.940 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:54 compute-0 nova_compute[189459]: 2025-12-02 17:28:54.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:57 compute-0 podman[258662]: 2025-12-02 17:28:57.279853624 +0000 UTC m=+0.103651511 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Dec  2 17:28:57 compute-0 podman[258664]: 2025-12-02 17:28:57.286724997 +0000 UTC m=+0.106127027 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  2 17:28:57 compute-0 podman[258663]: 2025-12-02 17:28:57.325713325 +0000 UTC m=+0.142766802 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, container_name=kepler, version=9.4, config_id=edpm)
Dec  2 17:28:58 compute-0 nova_compute[189459]: 2025-12-02 17:28:58.941 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:28:59 compute-0 nova_compute[189459]: 2025-12-02 17:28:59.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:28:59 compute-0 podman[203941]: time="2025-12-02T17:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:28:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:28:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: ERROR   17:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: ERROR   17:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: ERROR   17:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: ERROR   17:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: ERROR   17:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:29:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:29:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:29:01.902 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:29:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:29:01.902 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:29:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:29:01.903 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.059 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.060 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007ff46120>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'name': 'te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.068 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:29:03.068553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.073 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.078 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.080 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.081 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:29:03.080752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.082 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.083 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:29:03.084327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.102 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.103 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.128 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.129 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.130 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:29:03.130280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.153 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/cpu volume: 334930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.182 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 337890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:29:03.185606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.228 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.229 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.273 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 29641216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.274 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.275 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.275 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 484573788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:29:03.275255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.276 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 52770068 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.276 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 683015431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.276 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 72946936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.276 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.277 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.277 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.278 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:29:03.278030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.279 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 1107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.279 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.280 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.280 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.282 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.283 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:29:03.282827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.283 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.284 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.284 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.285 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.286 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:29:03.287514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.288 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.289 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.289 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.290 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.291 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.292 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:29:03.292034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.293 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.293 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.293 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.295 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:29:03.294857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.295 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 4413610685 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.295 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.296 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3622428283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.296 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.297 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.298 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:29:03.297801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.298 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:29:03.300177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.300 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.301 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.301 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 329 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.302 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.304 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:29:03.303598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.304 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.305 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:29:03.306464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.309 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:29:03.308577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.309 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:29:03.311155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.312 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.314 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:29:03.313279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.314 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.315 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:29:03.315890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.316 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.317 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:29:03.318442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.319 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.319 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.320 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.321 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.321 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.321 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:29:03.320904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.322 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.323 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/memory.usage volume: 42.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.323 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 42.4296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:29:03.323069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.325 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.325 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:29:03.325082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.326 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.327 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:29:03.326867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.327 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.327 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:29:03.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:29:03 compute-0 nova_compute[189459]: 2025-12-02 17:29:03.944 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:29:07 compute-0 podman[258723]: 2025-12-02 17:29:07.305340594 +0000 UTC m=+0.120230924 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:29:07 compute-0 podman[258724]: 2025-12-02 17:29:07.319616454 +0000 UTC m=+0.088591609 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:29:07 compute-0 podman[258722]: 2025-12-02 17:29:07.353490127 +0000 UTC m=+0.126575252 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:29:08 compute-0 nova_compute[189459]: 2025-12-02 17:29:08.946 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  2 17:29:13 compute-0 nova_compute[189459]: 2025-12-02 17:29:13.949 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:13 compute-0 nova_compute[189459]: 2025-12-02 17:29:13.950 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:18 compute-0 podman[258792]: 2025-12-02 17:29:18.289332911 +0000 UTC m=+0.109913399 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter)
Dec  2 17:29:18 compute-0 nova_compute[189459]: 2025-12-02 17:29:18.951 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  2 17:29:18 compute-0 nova_compute[189459]: 2025-12-02 17:29:18.952 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:18 compute-0 nova_compute[189459]: 2025-12-02 17:29:18.953 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  2 17:29:18 compute-0 nova_compute[189459]: 2025-12-02 17:29:18.953 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  2 17:29:18 compute-0 nova_compute[189459]: 2025-12-02 17:29:18.953 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  2 17:29:18 compute-0 nova_compute[189459]: 2025-12-02 17:29:18.954 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:23 compute-0 nova_compute[189459]: 2025-12-02 17:29:23.954 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  2 17:29:24 compute-0 podman[258812]: 2025-12-02 17:29:24.29810423 +0000 UTC m=+0.110643789 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:29:24 compute-0 podman[258813]: 2025-12-02 17:29:24.324262227 +0000 UTC m=+0.125854014 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:29:28 compute-0 podman[258851]: 2025-12-02 17:29:28.333263348 +0000 UTC m=+0.150708516 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible)
Dec  2 17:29:28 compute-0 podman[258852]: 2025-12-02 17:29:28.354560246 +0000 UTC m=+0.160596520 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, config_id=edpm, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64)
Dec  2 17:29:28 compute-0 podman[258853]: 2025-12-02 17:29:28.35808525 +0000 UTC m=+0.158613617 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec  2 17:29:28 compute-0 nova_compute[189459]: 2025-12-02 17:29:28.958 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  2 17:29:28 compute-0 nova_compute[189459]: 2025-12-02 17:29:28.960 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:28 compute-0 nova_compute[189459]: 2025-12-02 17:29:28.960 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  2 17:29:28 compute-0 nova_compute[189459]: 2025-12-02 17:29:28.960 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  2 17:29:28 compute-0 nova_compute[189459]: 2025-12-02 17:29:28.961 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  2 17:29:28 compute-0 nova_compute[189459]: 2025-12-02 17:29:28.962 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:29 compute-0 podman[203941]: time="2025-12-02T17:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:29:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:29:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: ERROR   17:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: ERROR   17:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: ERROR   17:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: ERROR   17:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: ERROR   17:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:29:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:29:33 compute-0 nova_compute[189459]: 2025-12-02 17:29:33.960 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:33 compute-0 nova_compute[189459]: 2025-12-02 17:29:33.963 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:38 compute-0 podman[258903]: 2025-12-02 17:29:38.288678551 +0000 UTC m=+0.097644423 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:29:38 compute-0 podman[258904]: 2025-12-02 17:29:38.290230902 +0000 UTC m=+0.099340857 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:29:38 compute-0 podman[258902]: 2025-12-02 17:29:38.331782789 +0000 UTC m=+0.144588983 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  2 17:29:38 compute-0 nova_compute[189459]: 2025-12-02 17:29:38.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:29:38 compute-0 nova_compute[189459]: 2025-12-02 17:29:38.963 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:43 compute-0 nova_compute[189459]: 2025-12-02 17:29:43.967 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:45 compute-0 nova_compute[189459]: 2025-12-02 17:29:45.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:29:45 compute-0 nova_compute[189459]: 2025-12-02 17:29:45.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 17:29:45 compute-0 nova_compute[189459]: 2025-12-02 17:29:45.809 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 17:29:45 compute-0 nova_compute[189459]: 2025-12-02 17:29:45.809 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 17:29:45 compute-0 nova_compute[189459]: 2025-12-02 17:29:45.810 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  2 17:29:46 compute-0 nova_compute[189459]: 2025-12-02 17:29:46.991 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [{"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 17:29:47 compute-0 nova_compute[189459]: 2025-12-02 17:29:47.010 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 17:29:47 compute-0 nova_compute[189459]: 2025-12-02 17:29:47.011 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  2 17:29:47 compute-0 nova_compute[189459]: 2025-12-02 17:29:47.012 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:29:48 compute-0 nova_compute[189459]: 2025-12-02 17:29:48.968 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:29:49 compute-0 podman[258974]: 2025-12-02 17:29:49.280714831 +0000 UTC m=+0.099428490 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git)
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.500 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.501 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.501 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.501 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.798 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.889 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.890 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.952 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:29:50 compute-0 nova_compute[189459]: 2025-12-02 17:29:50.958 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.025 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.026 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.095 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.514 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.516 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4960MB free_disk=72.06540298461914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.516 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.516 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.594 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.595 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.595 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.596 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.664 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.679 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.681 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:29:51 compute-0 nova_compute[189459]: 2025-12-02 17:29:51.681 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:29:52 compute-0 nova_compute[189459]: 2025-12-02 17:29:52.682 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:29:52 compute-0 nova_compute[189459]: 2025-12-02 17:29:52.683 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:29:53 compute-0 nova_compute[189459]: 2025-12-02 17:29:53.971 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:29:55 compute-0 podman[259008]: 2025-12-02 17:29:55.258523837 +0000 UTC m=+0.074640840 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, 
tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:29:55 compute-0 podman[259009]: 2025-12-02 17:29:55.26800613 +0000 UTC m=+0.092099395 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  2 17:29:55 compute-0 nova_compute[189459]: 2025-12-02 17:29:55.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:29:55 compute-0 nova_compute[189459]: 2025-12-02 17:29:55.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:29:56 compute-0 nova_compute[189459]: 2025-12-02 17:29:56.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:29:58 compute-0 nova_compute[189459]: 2025-12-02 17:29:58.974 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:29:59 compute-0 podman[259048]: 2025-12-02 17:29:59.268460653 +0000 UTC m=+0.093021439 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:29:59 compute-0 podman[259049]: 2025-12-02 17:29:59.274939676 +0000 UTC m=+0.092212448 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=)
Dec  2 17:29:59 compute-0 podman[259050]: 2025-12-02 17:29:59.306152808 +0000 UTC m=+0.109278413 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:29:59 compute-0 podman[203941]: time="2025-12-02T17:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:29:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:29:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec  2 17:30:00 compute-0 nova_compute[189459]: 2025-12-02 17:30:00.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: ERROR   17:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: ERROR   17:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: ERROR   17:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: ERROR   17:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: ERROR   17:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:30:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:30:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:30:01.903 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:30:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:30:01.903 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:30:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:30:01.904 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:30:03 compute-0 nova_compute[189459]: 2025-12-02 17:30:03.976 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:30:08 compute-0 nova_compute[189459]: 2025-12-02 17:30:08.979 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:30:08 compute-0 nova_compute[189459]: 2025-12-02 17:30:08.980 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:08 compute-0 nova_compute[189459]: 2025-12-02 17:30:08.981 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  2 17:30:08 compute-0 nova_compute[189459]: 2025-12-02 17:30:08.981 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 17:30:08 compute-0 nova_compute[189459]: 2025-12-02 17:30:08.982 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 17:30:08 compute-0 nova_compute[189459]: 2025-12-02 17:30:08.983 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:09 compute-0 podman[259108]: 2025-12-02 17:30:09.267672973 +0000 UTC m=+0.071324892 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:30:09 compute-0 podman[259107]: 2025-12-02 17:30:09.278669246 +0000 UTC m=+0.090788140 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:30:09 compute-0 podman[259106]: 2025-12-02 17:30:09.311536041 +0000 UTC m=+0.127657732 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:30:13 compute-0 nova_compute[189459]: 2025-12-02 17:30:13.983 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:18 compute-0 nova_compute[189459]: 2025-12-02 17:30:18.986 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:20 compute-0 podman[259178]: 2025-12-02 17:30:20.261149907 +0000 UTC m=+0.090356599 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, version=9.6, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  2 17:30:23 compute-0 nova_compute[189459]: 2025-12-02 17:30:23.988 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:26 compute-0 podman[259200]: 2025-12-02 17:30:26.270633315 +0000 UTC m=+0.092044654 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:30:26 compute-0 podman[259199]: 2025-12-02 17:30:26.274661962 +0000 UTC m=+0.095469965 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 17:30:28 compute-0 nova_compute[189459]: 2025-12-02 17:30:28.991 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:29 compute-0 podman[203941]: time="2025-12-02T17:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:30:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:30:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4779 "" "Go-http-client/1.1"
Dec  2 17:30:30 compute-0 podman[259240]: 2025-12-02 17:30:30.274181139 +0000 UTC m=+0.091376755 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:30:30 compute-0 podman[259241]: 2025-12-02 17:30:30.276312575 +0000 UTC m=+0.088988491 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, 
middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 17:30:30 compute-0 podman[259242]: 2025-12-02 17:30:30.290516714 +0000 UTC m=+0.097866658 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: ERROR   17:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: ERROR   17:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: ERROR   17:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: ERROR   17:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: ERROR   17:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:30:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:30:33 compute-0 nova_compute[189459]: 2025-12-02 17:30:33.993 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:38 compute-0 nova_compute[189459]: 2025-12-02 17:30:38.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:38 compute-0 nova_compute[189459]: 2025-12-02 17:30:38.995 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:38 compute-0 nova_compute[189459]: 2025-12-02 17:30:38.996 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:40 compute-0 podman[259298]: 2025-12-02 17:30:40.277920201 +0000 UTC m=+0.092832815 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:30:40 compute-0 podman[259299]: 2025-12-02 17:30:40.295006876 +0000 UTC m=+0.108563454 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:30:40 compute-0 podman[259297]: 2025-12-02 17:30:40.312201874 +0000 UTC m=+0.135120871 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:30:43 compute-0 nova_compute[189459]: 2025-12-02 17:30:43.997 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:46 compute-0 nova_compute[189459]: 2025-12-02 17:30:46.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.955 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.956 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquired lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.956 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 17:30:47 compute-0 nova_compute[189459]: 2025-12-02 17:30:47.957 189463 DEBUG nova.objects.instance [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:30:49 compute-0 nova_compute[189459]: 2025-12-02 17:30:49.000 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:30:49 compute-0 nova_compute[189459]: 2025-12-02 17:30:49.396 189463 DEBUG nova.network.neutron [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [{"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:30:49 compute-0 nova_compute[189459]: 2025-12-02 17:30:49.421 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Releasing lock "refresh_cache-3a077761-3f4d-47af-aea2-9c3255ed7868" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 17:30:49 compute-0 nova_compute[189459]: 2025-12-02 17:30:49.422 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.651 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.652 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.652 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.653 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.737 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.813 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.814 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.875 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.884 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.949 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:30:50 compute-0 nova_compute[189459]: 2025-12-02 17:30:50.950 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.005 189463 DEBUG oslo_concurrency.processutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:30:51 compute-0 podman[259378]: 2025-12-02 17:30:51.29959468 +0000 UTC m=+0.127933890 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.395 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.396 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4951MB free_disk=72.06540298461914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.396 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.397 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.577 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 3a077761-3f4d-47af-aea2-9c3255ed7868 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.577 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.577 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.578 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.655 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.747 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.748 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.762 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.784 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.871 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.887 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.890 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:30:51 compute-0 nova_compute[189459]: 2025-12-02 17:30:51.890 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:30:52 compute-0 nova_compute[189459]: 2025-12-02 17:30:52.892 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:54 compute-0 nova_compute[189459]: 2025-12-02 17:30:54.004 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:54 compute-0 nova_compute[189459]: 2025-12-02 17:30:54.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:55 compute-0 nova_compute[189459]: 2025-12-02 17:30:55.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:56 compute-0 nova_compute[189459]: 2025-12-02 17:30:56.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:56 compute-0 nova_compute[189459]: 2025-12-02 17:30:56.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:30:56 compute-0 nova_compute[189459]: 2025-12-02 17:30:56.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:57 compute-0 podman[259399]: 2025-12-02 17:30:57.261594341 +0000 UTC m=+0.090409730 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:30:57 compute-0 podman[259400]: 2025-12-02 17:30:57.265111565 +0000 UTC m=+0.091798837 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  2 17:30:57 compute-0 nova_compute[189459]: 2025-12-02 17:30:57.548 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:57 compute-0 nova_compute[189459]: 2025-12-02 17:30:57.548 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:30:57 compute-0 nova_compute[189459]: 2025-12-02 17:30:57.569 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:30:58 compute-0 nova_compute[189459]: 2025-12-02 17:30:58.427 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:30:59 compute-0 nova_compute[189459]: 2025-12-02 17:30:59.006 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:30:59 compute-0 podman[203941]: time="2025-12-02T17:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:30:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:30:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec  2 17:31:01 compute-0 podman[259440]: 2025-12-02 17:31:01.262012422 +0000 UTC m=+0.082950891 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  2 17:31:01 compute-0 podman[259438]: 2025-12-02 17:31:01.290551623 +0000 UTC m=+0.118713974 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:31:01 compute-0 podman[259439]: 2025-12-02 17:31:01.300440586 +0000 UTC m=+0.121579180 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: ERROR   17:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: ERROR   17:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: ERROR   17:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: ERROR   17:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: ERROR   17:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:31:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:31:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:01.904 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:01.904 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:01.905 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:02 compute-0 nova_compute[189459]: 2025-12-02 17:31:02.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.060 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.060 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.070 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'name': 'te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.078 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'name': 'te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3', 'flavor': {'id': '8e4a4b21-ee56-489d-aeb9-f21b8412f996', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '53890fe7-10ca-4d2d-8959-827e6ad0a9a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd97265454999468fb261510e60c81b0e', 'user_id': '5673ab6de24147cb96ea139c0ad6cb0e', 'hostId': '24fb4b6da4a0eddab67a65c4cbf779891047ae1df55719db3d2a354b', 'status': 'active', 'metadata': {'metering.server_group': 'bb3de81f-f629-45e4-a58b-8725288b0515'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.080 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-02T17:31:03.080035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.086 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.094 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.096 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.096 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.098 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-02T17:31:03.096106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-02T17:31:03.098729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.119 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.120 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.140 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-02T17:31:03.141938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.176 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/cpu volume: 336440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.218 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/cpu volume: 339290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.219 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.219 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.219 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.219 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-02T17:31:03.219528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.274 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 30591488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.277 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.333 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 29641216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.334 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.336 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 484573788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.336 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.latency volume: 52770068 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.337 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 683015431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.337 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.latency volume: 72946936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.339 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.339 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 1107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.340 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-02T17:31:03.335753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.340 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 1069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.340 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-02T17:31:03.339027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.343 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.344 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.344 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.345 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-02T17:31:03.342737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.347 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.348 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.348 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.349 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.350 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.351 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.351 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.352 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.352 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.353 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.354 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 4413610685 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.355 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.355 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 3622428283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.356 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.357 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-02T17:31:03.347311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.358 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.358 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-02T17:31:03.351015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-02T17:31:03.354485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.358 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-02T17:31:03.358160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-02T17:31:03.360342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.360 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.361 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.361 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 329 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.361 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.362 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.363 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.363 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.364 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.365 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-02T17:31:03.363454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.366 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.367 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.367 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.367 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-02T17:31:03.366014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-02T17:31:03.367346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.370 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.371 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.371 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-02T17:31:03.369256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-02T17:31:03.371138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.371 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.372 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.373 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.373 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.373 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.375 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-02T17:31:03.373220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.375 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.375 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-02T17:31:03.375619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.376 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.377 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.377 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.378 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.378 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.379 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.380 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/memory.usage volume: 42.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.380 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/memory.usage volume: 42.42578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.381 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-02T17:31:03.377913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-02T17:31:03.380283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.382 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.383 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-02T17:31:03.382747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.385 14 DEBUG ceilometer.compute.pollsters [-] 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.385 14 DEBUG ceilometer.compute.pollsters [-] 3a077761-3f4d-47af-aea2-9c3255ed7868/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.386 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.387 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.387 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.387 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-02T17:31:03.384888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.388 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.389 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:31:03.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:31:04 compute-0 nova_compute[189459]: 2025-12-02 17:31:04.009 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:31:04 compute-0 nova_compute[189459]: 2025-12-02 17:31:04.010 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:31:09 compute-0 nova_compute[189459]: 2025-12-02 17:31:09.012 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:31:11 compute-0 podman[259498]: 2025-12-02 17:31:11.268440562 +0000 UTC m=+0.083706081 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:31:11 compute-0 podman[259497]: 2025-12-02 17:31:11.30326386 +0000 UTC m=+0.119364871 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:31:11 compute-0 podman[259496]: 2025-12-02 17:31:11.352652476 +0000 UTC m=+0.176787791 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:31:14 compute-0 nova_compute[189459]: 2025-12-02 17:31:14.016 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:15 compute-0 nova_compute[189459]: 2025-12-02 17:31:15.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:15 compute-0 nova_compute[189459]: 2025-12-02 17:31:15.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:31:19 compute-0 nova_compute[189459]: 2025-12-02 17:31:19.017 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:19 compute-0 nova_compute[189459]: 2025-12-02 17:31:19.020 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:22 compute-0 podman[259568]: 2025-12-02 17:31:22.2565449 +0000 UTC m=+0.076248743 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:31:24 compute-0 nova_compute[189459]: 2025-12-02 17:31:24.019 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:24 compute-0 nova_compute[189459]: 2025-12-02 17:31:24.022 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:28 compute-0 podman[259591]: 2025-12-02 17:31:28.270255232 +0000 UTC m=+0.089943697 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 17:31:28 compute-0 podman[259592]: 2025-12-02 17:31:28.274780703 +0000 UTC m=+0.092127566 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.022 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:29 compute-0 podman[203941]: time="2025-12-02T17:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:31:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  2 17:31:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.988 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.989 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.990 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.990 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.991 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.992 189463 INFO nova.compute.manager [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Terminating instance#033[00m
Dec  2 17:31:29 compute-0 nova_compute[189459]: 2025-12-02 17:31:29.994 189463 DEBUG nova.compute.manager [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:31:30 compute-0 kernel: tap68e04713-a4 (unregistering): left promiscuous mode
Dec  2 17:31:30 compute-0 NetworkManager[56503]: <info>  [1764696690.0457] device (tap68e04713-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.060 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 ovn_controller[97975]: 2025-12-02T17:31:30Z|00173|binding|INFO|Releasing lport 68e04713-a4f3-481c-ba86-5b87fe8b2358 from this chassis (sb_readonly=0)
Dec  2 17:31:30 compute-0 ovn_controller[97975]: 2025-12-02T17:31:30Z|00174|binding|INFO|Setting lport 68e04713-a4f3-481c-ba86-5b87fe8b2358 down in Southbound
Dec  2 17:31:30 compute-0 ovn_controller[97975]: 2025-12-02T17:31:30Z|00175|binding|INFO|Removing iface tap68e04713-a4 ovn-installed in OVS
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.072 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.076 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:75:a2 10.100.3.185'], port_security=['fa:16:3e:66:75:a2 10.100.3.185'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.185/16', 'neutron:device_id': '3a077761-3f4d-47af-aea2-9c3255ed7868', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd97265454999468fb261510e60c81b0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf5ac8bc-8bfc-4f8e-a133-81a949c4ce5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de4d374-0f93-45af-a6f2-2a5ac9c09a1c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=68e04713-a4f3-481c-ba86-5b87fe8b2358) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.077 106835 INFO neutron.agent.ovn.metadata.agent [-] Port 68e04713-a4f3-481c-ba86-5b87fe8b2358 in datapath 82b562d0-fe3d-43c8-b78e-fc2eee29ef70 unbound from our chassis#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.079 106835 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82b562d0-fe3d-43c8-b78e-fc2eee29ef70#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.101 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[73a82075-bf7f-4c75-9ed1-86b48d9320d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.105 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  2 17:31:30 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 7min 16.762s CPU time.
Dec  2 17:31:30 compute-0 systemd-machined[155878]: Machine qemu-14-instance-0000000d terminated.
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.152 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[74473220-c563-46c2-82e3-dbdd603f105d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.157 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[b93682f3-03a1-4ab3-8280-1e96b4b03655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.193 240024 DEBUG oslo.privsep.daemon [-] privsep: reply[da04cc99-1694-4786-a96c-d2a52b59820c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.214 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[d08d0fb7-7c8b-4b8e-90f0-dae18265aa91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82b562d0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539436, 'reachable_time': 26659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259645, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.223 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.229 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.238 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[5404cdf4-8450-44a2-83a6-88940bf5a690]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap82b562d0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539452, 'tstamp': 539452}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259649, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap82b562d0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539456, 'tstamp': 539456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259649, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.240 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82b562d0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.242 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.249 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.249 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82b562d0-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.250 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.250 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82b562d0-f0, col_values=(('external_ids', {'iface-id': '3390bd6d-860e-4bcb-929b-c08f611343b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.251 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.279 189463 INFO nova.virt.libvirt.driver [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Instance destroyed successfully.#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.280 189463 DEBUG nova.objects.instance [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lazy-loading 'resources' on Instance uuid 3a077761-3f4d-47af-aea2-9c3255ed7868 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.300 189463 DEBUG nova.virt.libvirt.vif [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9550909-asg-hxzogcjdipbx-wz6kbtoyiooy-6tjv6x5gjrz3',id=13,image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bb3de81f-f629-45e4-a58b-8725288b0515'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d97265454999468fb261510e60c81b0e',ramdisk_id='',reservation_id='r-l33qblw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-603644689',owner_user_name='tempest-PrometheusGabbiTest-603644689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:17:26Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5673ab6de24147cb96ea139c0ad6cb0e',uuid=3a077761-3f4d-47af-aea2-9c3255ed7868,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.301 189463 DEBUG nova.network.os_vif_util [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converting VIF {"id": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "address": "fa:16:3e:66:75:a2", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.185", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68e04713-a4", "ovs_interfaceid": "68e04713-a4f3-481c-ba86-5b87fe8b2358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.303 189463 DEBUG nova.network.os_vif_util [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.303 189463 DEBUG os_vif [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.305 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.305 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68e04713-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.308 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.310 189463 DEBUG nova.compute.manager [req-13c0f420-d0d2-410c-84ba-d526b292db52 req-0e674007-e987-45a6-9369-95cb9a1f216c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-vif-unplugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.310 189463 DEBUG oslo_concurrency.lockutils [req-13c0f420-d0d2-410c-84ba-d526b292db52 req-0e674007-e987-45a6-9369-95cb9a1f216c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.311 189463 DEBUG oslo_concurrency.lockutils [req-13c0f420-d0d2-410c-84ba-d526b292db52 req-0e674007-e987-45a6-9369-95cb9a1f216c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.311 189463 DEBUG oslo_concurrency.lockutils [req-13c0f420-d0d2-410c-84ba-d526b292db52 req-0e674007-e987-45a6-9369-95cb9a1f216c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.311 189463 DEBUG nova.compute.manager [req-13c0f420-d0d2-410c-84ba-d526b292db52 req-0e674007-e987-45a6-9369-95cb9a1f216c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] No waiting events found dispatching network-vif-unplugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.311 189463 DEBUG nova.compute.manager [req-13c0f420-d0d2-410c-84ba-d526b292db52 req-0e674007-e987-45a6-9369-95cb9a1f216c b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-vif-unplugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.312 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.315 189463 INFO os_vif [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:75:a2,bridge_name='br-int',has_traffic_filtering=True,id=68e04713-a4f3-481c-ba86-5b87fe8b2358,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68e04713-a4')#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.316 189463 INFO nova.virt.libvirt.driver [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Deleting instance files /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868_del#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.317 189463 INFO nova.virt.libvirt.driver [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Deletion of /var/lib/nova/instances/3a077761-3f4d-47af-aea2-9c3255ed7868_del complete#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.396 189463 INFO nova.compute.manager [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.397 189463 DEBUG oslo.service.loopingcall [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.397 189463 DEBUG nova.compute.manager [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.397 189463 DEBUG nova.network.neutron [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.421 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:31:30 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:30.422 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:31:30 compute-0 nova_compute[189459]: 2025-12-02 17:31:30.425 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: ERROR   17:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: ERROR   17:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: ERROR   17:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: ERROR   17:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: ERROR   17:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:31:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:31:31 compute-0 nova_compute[189459]: 2025-12-02 17:31:31.991 189463 DEBUG nova.network.neutron [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.011 189463 INFO nova.compute.manager [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Took 1.61 seconds to deallocate network for instance.#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.066 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.067 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.161 189463 DEBUG nova.compute.provider_tree [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.178 189463 DEBUG nova.scheduler.client.report [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.199 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.225 189463 INFO nova.scheduler.client.report [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Deleted allocations for instance 3a077761-3f4d-47af-aea2-9c3255ed7868#033[00m
Dec  2 17:31:32 compute-0 podman[259666]: 2025-12-02 17:31:32.261310444 +0000 UTC m=+0.074730522 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 17:31:32 compute-0 podman[259665]: 2025-12-02 17:31:32.269195914 +0000 UTC m=+0.094724774 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, 
container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.287 189463 DEBUG oslo_concurrency.lockutils [None req-24d02fb5-a3fa-4ec6-ac1c-15457e4fc464 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:32 compute-0 podman[259664]: 2025-12-02 17:31:32.299406509 +0000 UTC m=+0.118966250 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.398 189463 DEBUG nova.compute.manager [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.398 189463 DEBUG oslo_concurrency.lockutils [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.398 189463 DEBUG oslo_concurrency.lockutils [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.398 189463 DEBUG oslo_concurrency.lockutils [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "3a077761-3f4d-47af-aea2-9c3255ed7868-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.399 189463 DEBUG nova.compute.manager [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] No waiting events found dispatching network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.399 189463 WARNING nova.compute.manager [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received unexpected event network-vif-plugged-68e04713-a4f3-481c-ba86-5b87fe8b2358 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:31:32 compute-0 nova_compute[189459]: 2025-12-02 17:31:32.399 189463 DEBUG nova.compute.manager [req-b4667296-beef-437b-b295-25cde4f312d5 req-8c96ab06-3eff-4d18-b1fd-63014a8e534a b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Received event network-vif-deleted-68e04713-a4f3-481c-ba86-5b87fe8b2358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:31:34 compute-0 nova_compute[189459]: 2025-12-02 17:31:34.025 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:35 compute-0 nova_compute[189459]: 2025-12-02 17:31:35.308 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:37 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:37.425 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.027 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.422 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.736 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.737 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.738 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.738 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.738 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.740 189463 INFO nova.compute.manager [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Terminating instance#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.741 189463 DEBUG nova.compute.manager [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 17:31:39 compute-0 kernel: tapb7169bf1-4d (unregistering): left promiscuous mode
Dec  2 17:31:39 compute-0 NetworkManager[56503]: <info>  [1764696699.7837] device (tapb7169bf1-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 17:31:39 compute-0 ovn_controller[97975]: 2025-12-02T17:31:39Z|00176|binding|INFO|Releasing lport b7169bf1-4de3-40ed-bda2-cdae863fd264 from this chassis (sb_readonly=0)
Dec  2 17:31:39 compute-0 ovn_controller[97975]: 2025-12-02T17:31:39Z|00177|binding|INFO|Setting lport b7169bf1-4de3-40ed-bda2-cdae863fd264 down in Southbound
Dec  2 17:31:39 compute-0 ovn_controller[97975]: 2025-12-02T17:31:39Z|00178|binding|INFO|Removing iface tapb7169bf1-4d ovn-installed in OVS
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.798 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:39.806 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:2c:97 10.100.3.205'], port_security=['fa:16:3e:0f:2c:97 10.100.3.205'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.205/16', 'neutron:device_id': '2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd97265454999468fb261510e60c81b0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf5ac8bc-8bfc-4f8e-a133-81a949c4ce5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de4d374-0f93-45af-a6f2-2a5ac9c09a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>], logical_port=b7169bf1-4de3-40ed-bda2-cdae863fd264) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd566bf6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:31:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:39.808 106835 INFO neutron.agent.ovn.metadata.agent [-] Port b7169bf1-4de3-40ed-bda2-cdae863fd264 in datapath 82b562d0-fe3d-43c8-b78e-fc2eee29ef70 unbound from our chassis#033[00m
Dec  2 17:31:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:39.809 106835 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82b562d0-fe3d-43c8-b78e-fc2eee29ef70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 17:31:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:39.814 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[33bdb3b3-b8f1-44ac-b21e-0a567b173bdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:39 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:39.815 106835 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70 namespace which is not needed anymore#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.825 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:39 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  2 17:31:39 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 42.789s CPU time.
Dec  2 17:31:39 compute-0 systemd-machined[155878]: Machine qemu-16-instance-0000000f terminated.
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.975 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:39 compute-0 nova_compute[189459]: 2025-12-02 17:31:39.984 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.036 189463 INFO nova.virt.libvirt.driver [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Instance destroyed successfully.#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.037 189463 DEBUG nova.objects.instance [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lazy-loading 'resources' on Instance uuid 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.051 189463 DEBUG nova.virt.libvirt.vif [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T17:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9550909-asg-hxzogcjdipbx-mfoo5z34q6nf-pf67q7rels3z',id=15,image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T17:21:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bb3de81f-f629-45e4-a58b-8725288b0515'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d97265454999468fb261510e60c81b0e',ramdisk_id='',reservation_id='r-gc6gldzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='53890fe7-10ca-4d2d-8959-827e6ad0a9a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-603644689',owner_user_name='tempest-PrometheusGabbiTest-603644689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T17:21:41Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5673ab6de24147cb96ea139c0ad6cb0e',uuid=2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.053 189463 DEBUG nova.network.os_vif_util [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converting VIF {"id": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "address": "fa:16:3e:0f:2c:97", "network": {"id": "82b562d0-fe3d-43c8-b78e-fc2eee29ef70", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d97265454999468fb261510e60c81b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7169bf1-4d", "ovs_interfaceid": "b7169bf1-4de3-40ed-bda2-cdae863fd264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.054 189463 DEBUG nova.network.os_vif_util [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.055 189463 DEBUG os_vif [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.057 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.058 189463 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7169bf1-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:40 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [NOTICE]   (254048) : haproxy version is 2.8.14-c23fe91
Dec  2 17:31:40 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [NOTICE]   (254048) : path to executable is /usr/sbin/haproxy
Dec  2 17:31:40 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [WARNING]  (254048) : Exiting Master process...
Dec  2 17:31:40 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [WARNING]  (254048) : Exiting Master process...
Dec  2 17:31:40 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [ALERT]    (254048) : Current worker (254050) exited with code 143 (Terminated)
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.062 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:40 compute-0 neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70[254044]: [WARNING]  (254048) : All workers exited. Exiting... (0)
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.065 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 17:31:40 compute-0 systemd[1]: libpod-ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42.scope: Deactivated successfully.
Dec  2 17:31:40 compute-0 podman[259746]: 2025-12-02 17:31:40.07114777 +0000 UTC m=+0.083253989 container died ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.071 189463 INFO os_vif [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:2c:97,bridge_name='br-int',has_traffic_filtering=True,id=b7169bf1-4de3-40ed-bda2-cdae863fd264,network=Network(82b562d0-fe3d-43c8-b78e-fc2eee29ef70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7169bf1-4d')#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.073 189463 INFO nova.virt.libvirt.driver [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Deleting instance files /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e_del#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.074 189463 INFO nova.virt.libvirt.driver [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Deletion of /var/lib/nova/instances/2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e_del complete#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.090 189463 DEBUG nova.compute.manager [req-99ca41e3-6c5a-47bb-9b70-71a70d7d8edd req-76409454-1f30-4a72-9436-1eb83d0e71ca b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-vif-unplugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.091 189463 DEBUG oslo_concurrency.lockutils [req-99ca41e3-6c5a-47bb-9b70-71a70d7d8edd req-76409454-1f30-4a72-9436-1eb83d0e71ca b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.091 189463 DEBUG oslo_concurrency.lockutils [req-99ca41e3-6c5a-47bb-9b70-71a70d7d8edd req-76409454-1f30-4a72-9436-1eb83d0e71ca b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.091 189463 DEBUG oslo_concurrency.lockutils [req-99ca41e3-6c5a-47bb-9b70-71a70d7d8edd req-76409454-1f30-4a72-9436-1eb83d0e71ca b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.091 189463 DEBUG nova.compute.manager [req-99ca41e3-6c5a-47bb-9b70-71a70d7d8edd req-76409454-1f30-4a72-9436-1eb83d0e71ca b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] No waiting events found dispatching network-vif-unplugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.091 189463 DEBUG nova.compute.manager [req-99ca41e3-6c5a-47bb-9b70-71a70d7d8edd req-76409454-1f30-4a72-9436-1eb83d0e71ca b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-vif-unplugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 17:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42-userdata-shm.mount: Deactivated successfully.
Dec  2 17:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4efc83dbfd8d58178d2795a0d0fe3a25999803f1770a67338433f2f68c2e555b-merged.mount: Deactivated successfully.
Dec  2 17:31:40 compute-0 podman[259746]: 2025-12-02 17:31:40.13080271 +0000 UTC m=+0.142908919 container cleanup ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.149 189463 INFO nova.compute.manager [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.150 189463 DEBUG oslo.service.loopingcall [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.150 189463 DEBUG nova.compute.manager [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.150 189463 DEBUG nova.network.neutron [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 17:31:40 compute-0 systemd[1]: libpod-conmon-ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42.scope: Deactivated successfully.
Dec  2 17:31:40 compute-0 podman[259790]: 2025-12-02 17:31:40.2205038 +0000 UTC m=+0.056896817 container remove ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.229 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[42e5a2f2-66ce-4efb-b3f5-d81b13581f33]: (4, ('Tue Dec  2 05:31:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70 (ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42)\nad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42\nTue Dec  2 05:31:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70 (ad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42)\nad7dc4c2e9724dd6eba640665721f9b394829fdb9c46d6613d002c8369e10b42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.232 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc3112c-acdd-4068-a4da-4c6892002633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.233 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82b562d0-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.237 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:40 compute-0 kernel: tap82b562d0-f0: left promiscuous mode
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.259 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.265 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[2426015a-f05f-4145-bb7b-1be25872cdbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.286 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[319b64b0-9803-40eb-a7a2-497281bba6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.288 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe95f2c-02ba-4780-a73a-d4cec3041b77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.310 240010 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee3564c-e13a-4943-b0a0-34b62bf9ccf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539427, 'reachable_time': 39606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259805, 'error': None, 'target': 'ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.314 106947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82b562d0-fe3d-43c8-b78e-fc2eee29ef70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 17:31:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d82b562d0\x2dfe3d\x2d43c8\x2db78e\x2dfc2eee29ef70.mount: Deactivated successfully.
Dec  2 17:31:40 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:31:40.314 106947 DEBUG oslo.privsep.daemon [-] privsep: reply[356da8d8-b21b-4d31-955e-1b5248d2cfde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.679 189463 DEBUG nova.network.neutron [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.708 189463 INFO nova.compute.manager [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Took 0.56 seconds to deallocate network for instance.#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.728 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.764 189463 DEBUG nova.compute.manager [req-82db1851-beb6-4fee-bba9-adc1b56caa26 req-663e2de4-c068-47af-bb75-8b0543a439f7 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-vif-deleted-b7169bf1-4de3-40ed-bda2-cdae863fd264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.766 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.766 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.769 189463 WARNING nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.769 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Triggering sync for uuid 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.769 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.836 189463 DEBUG nova.compute.provider_tree [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.849 189463 DEBUG nova.scheduler.client.report [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.869 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.889 189463 INFO nova.scheduler.client.report [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Deleted allocations for instance 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.946 189463 DEBUG oslo_concurrency.lockutils [None req-cffb74e5-3f9f-4763-8fb6-a8771dbf68b4 5673ab6de24147cb96ea139c0ad6cb0e d97265454999468fb261510e60c81b0e - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.947 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.947 189463 INFO nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  2 17:31:40 compute-0 nova_compute[189459]: 2025-12-02 17:31:40.948 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:42 compute-0 nova_compute[189459]: 2025-12-02 17:31:42.152 189463 DEBUG nova.compute.manager [req-bf666aa9-e655-4290-9bd1-d4eb29b14435 req-bbe93bc7-634f-40dc-ab74-06c6ae8dd315 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 17:31:42 compute-0 nova_compute[189459]: 2025-12-02 17:31:42.152 189463 DEBUG oslo_concurrency.lockutils [req-bf666aa9-e655-4290-9bd1-d4eb29b14435 req-bbe93bc7-634f-40dc-ab74-06c6ae8dd315 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Acquiring lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:42 compute-0 nova_compute[189459]: 2025-12-02 17:31:42.152 189463 DEBUG oslo_concurrency.lockutils [req-bf666aa9-e655-4290-9bd1-d4eb29b14435 req-bbe93bc7-634f-40dc-ab74-06c6ae8dd315 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:42 compute-0 nova_compute[189459]: 2025-12-02 17:31:42.152 189463 DEBUG oslo_concurrency.lockutils [req-bf666aa9-e655-4290-9bd1-d4eb29b14435 req-bbe93bc7-634f-40dc-ab74-06c6ae8dd315 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] Lock "2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:42 compute-0 nova_compute[189459]: 2025-12-02 17:31:42.153 189463 DEBUG nova.compute.manager [req-bf666aa9-e655-4290-9bd1-d4eb29b14435 req-bbe93bc7-634f-40dc-ab74-06c6ae8dd315 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] No waiting events found dispatching network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 17:31:42 compute-0 nova_compute[189459]: 2025-12-02 17:31:42.153 189463 WARNING nova.compute.manager [req-bf666aa9-e655-4290-9bd1-d4eb29b14435 req-bbe93bc7-634f-40dc-ab74-06c6ae8dd315 b94c59329a854884b28cb69a5f3156c6 cdde98479f5349b5ac14aaac8392bdae - - default default] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Received unexpected event network-vif-plugged-b7169bf1-4de3-40ed-bda2-cdae863fd264 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 17:31:42 compute-0 podman[259808]: 2025-12-02 17:31:42.303776823 +0000 UTC m=+0.101808303 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:31:42 compute-0 podman[259807]: 2025-12-02 17:31:42.327634019 +0000 UTC m=+0.131439433 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:31:42 compute-0 podman[259806]: 2025-12-02 17:31:42.344742175 +0000 UTC m=+0.156195753 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:31:44 compute-0 nova_compute[189459]: 2025-12-02 17:31:44.030 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:45 compute-0 nova_compute[189459]: 2025-12-02 17:31:45.063 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:45 compute-0 nova_compute[189459]: 2025-12-02 17:31:45.276 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764696690.2745883, 3a077761-3f4d-47af-aea2-9c3255ed7868 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:31:45 compute-0 nova_compute[189459]: 2025-12-02 17:31:45.277 189463 INFO nova.compute.manager [-] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:31:45 compute-0 nova_compute[189459]: 2025-12-02 17:31:45.309 189463 DEBUG nova.compute.manager [None req-4d76906e-3ab9-43bc-b23d-58559c98e219 - - - - - -] [instance: 3a077761-3f4d-47af-aea2-9c3255ed7868] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:31:47 compute-0 nova_compute[189459]: 2025-12-02 17:31:47.450 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:47 compute-0 nova_compute[189459]: 2025-12-02 17:31:47.452 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:31:47 compute-0 nova_compute[189459]: 2025-12-02 17:31:47.484 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:31:48 compute-0 nova_compute[189459]: 2025-12-02 17:31:48.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:49 compute-0 nova_compute[189459]: 2025-12-02 17:31:49.033 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:50 compute-0 nova_compute[189459]: 2025-12-02 17:31:50.067 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:51 compute-0 nova_compute[189459]: 2025-12-02 17:31:51.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.436 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.437 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.437 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.437 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:31:52 compute-0 podman[259880]: 2025-12-02 17:31:52.594599542 +0000 UTC m=+0.097685793 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc.)
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.751 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.752 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5350MB free_disk=72.12287902832031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.752 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.752 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.856 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.857 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.881 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.895 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.917 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:31:52 compute-0 nova_compute[189459]: 2025-12-02 17:31:52.917 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:31:54 compute-0 nova_compute[189459]: 2025-12-02 17:31:54.035 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:55 compute-0 nova_compute[189459]: 2025-12-02 17:31:55.031 189463 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764696700.0302055, 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 17:31:55 compute-0 nova_compute[189459]: 2025-12-02 17:31:55.033 189463 INFO nova.compute.manager [-] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] VM Stopped (Lifecycle Event)#033[00m
Dec  2 17:31:55 compute-0 nova_compute[189459]: 2025-12-02 17:31:55.067 189463 DEBUG nova.compute.manager [None req-9e47278d-21cb-4331-9b7e-8df3016c0b32 - - - - - -] [instance: 2f675e14-c6f0-4f48-bc90-9d0cf0c19f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 17:31:55 compute-0 nova_compute[189459]: 2025-12-02 17:31:55.071 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:55 compute-0 nova_compute[189459]: 2025-12-02 17:31:55.702 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:56 compute-0 nova_compute[189459]: 2025-12-02 17:31:56.917 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:58 compute-0 nova_compute[189459]: 2025-12-02 17:31:58.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:58 compute-0 nova_compute[189459]: 2025-12-02 17:31:58.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:31:59 compute-0 nova_compute[189459]: 2025-12-02 17:31:59.037 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:31:59 compute-0 podman[259901]: 2025-12-02 17:31:59.274766352 +0000 UTC m=+0.093015639 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:31:59 compute-0 podman[259900]: 2025-12-02 17:31:59.283749291 +0000 UTC m=+0.104083274 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Dec  2 17:31:59 compute-0 nova_compute[189459]: 2025-12-02 17:31:59.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:31:59 compute-0 podman[203941]: time="2025-12-02T17:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:31:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:31:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4321 "" "Go-http-client/1.1"
Dec  2 17:32:00 compute-0 nova_compute[189459]: 2025-12-02 17:32:00.074 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: ERROR   17:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: ERROR   17:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: ERROR   17:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: ERROR   17:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: ERROR   17:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:32:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:32:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:32:01.905 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:32:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:32:01.907 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:32:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:32:01.908 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:32:02 compute-0 nova_compute[189459]: 2025-12-02 17:32:02.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:03 compute-0 podman[259938]: 2025-12-02 17:32:03.282128278 +0000 UTC m=+0.098816233 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Dec  2 17:32:03 compute-0 podman[259939]: 2025-12-02 17:32:03.290931762 +0000 UTC m=+0.094258101 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  2 17:32:03 compute-0 podman[259937]: 2025-12-02 17:32:03.32501597 +0000 UTC m=+0.139878996 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  2 17:32:04 compute-0 nova_compute[189459]: 2025-12-02 17:32:04.043 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:05 compute-0 nova_compute[189459]: 2025-12-02 17:32:05.078 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:09 compute-0 nova_compute[189459]: 2025-12-02 17:32:09.045 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:10 compute-0 nova_compute[189459]: 2025-12-02 17:32:10.083 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:13 compute-0 podman[259995]: 2025-12-02 17:32:13.277944997 +0000 UTC m=+0.091515009 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:32:13 compute-0 podman[259994]: 2025-12-02 17:32:13.296617365 +0000 UTC m=+0.114085641 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:32:13 compute-0 podman[259993]: 2025-12-02 17:32:13.336745334 +0000 UTC m=+0.153470070 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller)
Dec  2 17:32:14 compute-0 nova_compute[189459]: 2025-12-02 17:32:14.048 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:15 compute-0 nova_compute[189459]: 2025-12-02 17:32:15.086 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:19 compute-0 nova_compute[189459]: 2025-12-02 17:32:19.050 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:20 compute-0 nova_compute[189459]: 2025-12-02 17:32:20.089 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:23 compute-0 podman[260067]: 2025-12-02 17:32:23.262198407 +0000 UTC m=+0.084335048 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec  2 17:32:24 compute-0 nova_compute[189459]: 2025-12-02 17:32:24.052 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:25 compute-0 nova_compute[189459]: 2025-12-02 17:32:25.093 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:29 compute-0 nova_compute[189459]: 2025-12-02 17:32:29.057 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:29 compute-0 podman[203941]: time="2025-12-02T17:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:32:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:32:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4321 "" "Go-http-client/1.1"
Dec  2 17:32:30 compute-0 nova_compute[189459]: 2025-12-02 17:32:30.097 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:30 compute-0 podman[260088]: 2025-12-02 17:32:30.283793773 +0000 UTC m=+0.106408786 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  2 17:32:30 compute-0 podman[260087]: 2025-12-02 17:32:30.302948824 +0000 UTC m=+0.122327250 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: ERROR   17:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: ERROR   17:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: ERROR   17:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: ERROR   17:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: ERROR   17:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:32:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:32:34 compute-0 nova_compute[189459]: 2025-12-02 17:32:34.060 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:34 compute-0 podman[260125]: 2025-12-02 17:32:34.272085254 +0000 UTC m=+0.097223912 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 17:32:34 compute-0 podman[260127]: 2025-12-02 17:32:34.315325996 +0000 UTC m=+0.118109198 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:32:34 compute-0 podman[260126]: 2025-12-02 17:32:34.325321602 +0000 UTC m=+0.133585340 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Dec  2 17:32:35 compute-0 nova_compute[189459]: 2025-12-02 17:32:35.102 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:38 compute-0 ovn_controller[97975]: 2025-12-02T17:32:38Z|00179|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Dec  2 17:32:39 compute-0 nova_compute[189459]: 2025-12-02 17:32:39.061 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:39 compute-0 nova_compute[189459]: 2025-12-02 17:32:39.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:40 compute-0 nova_compute[189459]: 2025-12-02 17:32:40.106 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:44 compute-0 nova_compute[189459]: 2025-12-02 17:32:44.064 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:44 compute-0 podman[260183]: 2025-12-02 17:32:44.267084821 +0000 UTC m=+0.090748429 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:32:44 compute-0 podman[260182]: 2025-12-02 17:32:44.269252209 +0000 UTC m=+0.094031707 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:32:44 compute-0 podman[260181]: 2025-12-02 17:32:44.29033101 +0000 UTC m=+0.123826420 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  2 17:32:45 compute-0 nova_compute[189459]: 2025-12-02 17:32:45.110 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:47 compute-0 nova_compute[189459]: 2025-12-02 17:32:47.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:47 compute-0 nova_compute[189459]: 2025-12-02 17:32:47.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:32:47 compute-0 nova_compute[189459]: 2025-12-02 17:32:47.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:32:47 compute-0 nova_compute[189459]: 2025-12-02 17:32:47.441 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:32:49 compute-0 nova_compute[189459]: 2025-12-02 17:32:49.068 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:50 compute-0 nova_compute[189459]: 2025-12-02 17:32:50.113 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:50 compute-0 nova_compute[189459]: 2025-12-02 17:32:50.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:51 compute-0 nova_compute[189459]: 2025-12-02 17:32:51.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.442 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.443 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.443 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.443 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.923 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.924 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5352MB free_disk=72.12287902832031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.924 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.925 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.985 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:32:52 compute-0 nova_compute[189459]: 2025-12-02 17:32:52.986 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:32:53 compute-0 nova_compute[189459]: 2025-12-02 17:32:53.008 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:32:53 compute-0 nova_compute[189459]: 2025-12-02 17:32:53.025 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:32:53 compute-0 nova_compute[189459]: 2025-12-02 17:32:53.028 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:32:53 compute-0 nova_compute[189459]: 2025-12-02 17:32:53.028 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:32:54 compute-0 nova_compute[189459]: 2025-12-02 17:32:54.073 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:54 compute-0 podman[260249]: 2025-12-02 17:32:54.296526906 +0000 UTC m=+0.118724834 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Dec  2 17:32:55 compute-0 nova_compute[189459]: 2025-12-02 17:32:55.117 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:56 compute-0 nova_compute[189459]: 2025-12-02 17:32:56.025 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:56 compute-0 nova_compute[189459]: 2025-12-02 17:32:56.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:59 compute-0 nova_compute[189459]: 2025-12-02 17:32:59.075 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:32:59 compute-0 nova_compute[189459]: 2025-12-02 17:32:59.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:32:59 compute-0 podman[203941]: time="2025-12-02T17:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:32:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:32:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4321 "" "Go-http-client/1.1"
Dec  2 17:33:00 compute-0 nova_compute[189459]: 2025-12-02 17:33:00.121 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:00 compute-0 nova_compute[189459]: 2025-12-02 17:33:00.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:00 compute-0 nova_compute[189459]: 2025-12-02 17:33:00.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:33:01 compute-0 podman[260273]: 2025-12-02 17:33:01.28429629 +0000 UTC m=+0.105161442 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec  2 17:33:01 compute-0 podman[260272]: 2025-12-02 17:33:01.285168484 +0000 UTC m=+0.115502719 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS 
Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:33:01 compute-0 openstack_network_exporter[206093]: ERROR   17:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:33:01 compute-0 openstack_network_exporter[206093]: ERROR   17:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:33:01 compute-0 openstack_network_exporter[206093]: ERROR   17:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:33:01 compute-0 openstack_network_exporter[206093]: ERROR   17:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:33:01 compute-0 openstack_network_exporter[206093]: ERROR   17:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:33:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:33:01.906 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:33:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:33:01.907 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:33:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:33:01.907 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.060 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.061 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00814247a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:33:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:33:04 compute-0 nova_compute[189459]: 2025-12-02 17:33:04.079 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:04 compute-0 nova_compute[189459]: 2025-12-02 17:33:04.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:05 compute-0 nova_compute[189459]: 2025-12-02 17:33:05.125 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:05 compute-0 podman[260309]: 2025-12-02 17:33:05.26239836 +0000 UTC m=+0.076863089 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., version=9.4, release=1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:33:05 compute-0 podman[260308]: 2025-12-02 17:33:05.288928266 +0000 UTC m=+0.104409862 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  2 17:33:05 compute-0 podman[260310]: 2025-12-02 17:33:05.316596814 +0000 UTC m=+0.122885515 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:33:08 compute-0 systemd-logind[790]: New session 31 of user zuul.
Dec  2 17:33:08 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec  2 17:33:09 compute-0 nova_compute[189459]: 2025-12-02 17:33:09.082 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:10 compute-0 nova_compute[189459]: 2025-12-02 17:33:10.127 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:14 compute-0 nova_compute[189459]: 2025-12-02 17:33:14.085 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:14 compute-0 ovs-vsctl[260539]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  2 17:33:14 compute-0 podman[260579]: 2025-12-02 17:33:14.766998531 +0000 UTC m=+0.074760613 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:33:14 compute-0 podman[260578]: 2025-12-02 17:33:14.771977964 +0000 UTC m=+0.081345599 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:33:14 compute-0 podman[260577]: 2025-12-02 17:33:14.856433284 +0000 UTC m=+0.149395001 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 17:33:15 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 260393 (sos)
Dec  2 17:33:15 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  2 17:33:15 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  2 17:33:15 compute-0 nova_compute[189459]: 2025-12-02 17:33:15.130 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:15 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  2 17:33:15 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  2 17:33:15 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  2 17:33:19 compute-0 nova_compute[189459]: 2025-12-02 17:33:19.087 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:19 compute-0 systemd[1]: Starting Hostname Service...
Dec  2 17:33:19 compute-0 systemd[1]: Started Hostname Service.
Dec  2 17:33:20 compute-0 nova_compute[189459]: 2025-12-02 17:33:20.133 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:24 compute-0 nova_compute[189459]: 2025-12-02 17:33:24.090 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:25 compute-0 nova_compute[189459]: 2025-12-02 17:33:25.137 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:25 compute-0 podman[261659]: 2025-12-02 17:33:25.306400141 +0000 UTC m=+0.078301227 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  2 17:33:28 compute-0 ovs-appctl[262427]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  2 17:33:28 compute-0 ovs-appctl[262432]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  2 17:33:28 compute-0 ovs-appctl[262437]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  2 17:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1649692773-merged.mount: Deactivated successfully.
Dec  2 17:33:29 compute-0 nova_compute[189459]: 2025-12-02 17:33:29.091 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:29 compute-0 podman[203941]: time="2025-12-02T17:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:33:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:33:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4327 "" "Go-http-client/1.1"
Dec  2 17:33:30 compute-0 nova_compute[189459]: 2025-12-02 17:33:30.139 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: ERROR   17:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: ERROR   17:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: ERROR   17:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: ERROR   17:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: ERROR   17:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:33:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:33:32 compute-0 podman[263315]: 2025-12-02 17:33:32.256240375 +0000 UTC m=+0.074648930 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 17:33:32 compute-0 podman[263308]: 2025-12-02 17:33:32.279191357 +0000 UTC m=+0.099882712 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, io.buildah.version=1.41.4)
Dec  2 17:33:34 compute-0 nova_compute[189459]: 2025-12-02 17:33:34.092 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:35 compute-0 nova_compute[189459]: 2025-12-02 17:33:35.141 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:35 compute-0 podman[263431]: 2025-12-02 17:33:35.877938698 +0000 UTC m=+0.103675243 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, architecture=x86_64)
Dec  2 17:33:35 compute-0 podman[263428]: 2025-12-02 17:33:35.897097748 +0000 UTC m=+0.124205880 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 17:33:35 compute-0 podman[263432]: 2025-12-02 17:33:35.900619282 +0000 UTC m=+0.119678119 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:33:38 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  2 17:33:39 compute-0 nova_compute[189459]: 2025-12-02 17:33:39.095 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:39 compute-0 systemd[1]: Starting Time & Date Service...
Dec  2 17:33:39 compute-0 nova_compute[189459]: 2025-12-02 17:33:39.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:39 compute-0 systemd[1]: Started Time & Date Service.
Dec  2 17:33:40 compute-0 nova_compute[189459]: 2025-12-02 17:33:40.144 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:44 compute-0 nova_compute[189459]: 2025-12-02 17:33:44.100 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:45 compute-0 nova_compute[189459]: 2025-12-02 17:33:45.148 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:45 compute-0 podman[263937]: 2025-12-02 17:33:45.275829285 +0000 UTC m=+0.090870632 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:33:45 compute-0 podman[263936]: 2025-12-02 17:33:45.277218102 +0000 UTC m=+0.099757229 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:33:45 compute-0 podman[263935]: 2025-12-02 17:33:45.344063263 +0000 UTC m=+0.166472427 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller)
Dec  2 17:33:48 compute-0 nova_compute[189459]: 2025-12-02 17:33:48.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:48 compute-0 nova_compute[189459]: 2025-12-02 17:33:48.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:33:48 compute-0 nova_compute[189459]: 2025-12-02 17:33:48.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:33:48 compute-0 nova_compute[189459]: 2025-12-02 17:33:48.436 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:33:49 compute-0 nova_compute[189459]: 2025-12-02 17:33:49.105 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:50 compute-0 nova_compute[189459]: 2025-12-02 17:33:50.152 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:51 compute-0 nova_compute[189459]: 2025-12-02 17:33:51.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.440 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.440 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.798 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.801 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=71.8482894897461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.802 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.802 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.870 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.871 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.914 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.930 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.957 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:33:53 compute-0 nova_compute[189459]: 2025-12-02 17:33:53.958 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:33:54 compute-0 nova_compute[189459]: 2025-12-02 17:33:54.107 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:55 compute-0 nova_compute[189459]: 2025-12-02 17:33:55.156 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:56 compute-0 podman[264006]: 2025-12-02 17:33:56.30145552 +0000 UTC m=+0.121623282 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec  2 17:33:56 compute-0 nova_compute[189459]: 2025-12-02 17:33:56.960 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:59 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec  2 17:33:59 compute-0 systemd[1]: session-31.scope: Consumed 1min 36.018s CPU time, 646.5M memory peak, read 251.6M from disk, written 36.5M to disk.
Dec  2 17:33:59 compute-0 systemd-logind[790]: Session 31 logged out. Waiting for processes to exit.
Dec  2 17:33:59 compute-0 systemd-logind[790]: Removed session 31.
Dec  2 17:33:59 compute-0 nova_compute[189459]: 2025-12-02 17:33:59.112 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:33:59 compute-0 systemd-logind[790]: New session 32 of user zuul.
Dec  2 17:33:59 compute-0 systemd[1]: Started Session 32 of User zuul.
Dec  2 17:33:59 compute-0 nova_compute[189459]: 2025-12-02 17:33:59.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:33:59 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec  2 17:33:59 compute-0 systemd-logind[790]: Session 32 logged out. Waiting for processes to exit.
Dec  2 17:33:59 compute-0 systemd-logind[790]: Removed session 32.
Dec  2 17:33:59 compute-0 systemd-logind[790]: New session 33 of user zuul.
Dec  2 17:33:59 compute-0 systemd[1]: Started Session 33 of User zuul.
Dec  2 17:33:59 compute-0 podman[203941]: time="2025-12-02T17:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:33:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:33:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4323 "" "Go-http-client/1.1"
Dec  2 17:33:59 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec  2 17:33:59 compute-0 systemd-logind[790]: Session 33 logged out. Waiting for processes to exit.
Dec  2 17:33:59 compute-0 systemd-logind[790]: Removed session 33.
Dec  2 17:34:00 compute-0 nova_compute[189459]: 2025-12-02 17:34:00.160 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:01 compute-0 nova_compute[189459]: 2025-12-02 17:34:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:01 compute-0 nova_compute[189459]: 2025-12-02 17:34:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: ERROR   17:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: ERROR   17:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: ERROR   17:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: ERROR   17:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: ERROR   17:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:34:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:34:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:34:01.908 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:34:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:34:01.909 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:34:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:34:01.909 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:34:03 compute-0 podman[264085]: 2025-12-02 17:34:03.26508686 +0000 UTC m=+0.084910083 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:34:03 compute-0 podman[264086]: 2025-12-02 17:34:03.324236586 +0000 UTC m=+0.132261965 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:34:04 compute-0 nova_compute[189459]: 2025-12-02 17:34:04.113 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:05 compute-0 nova_compute[189459]: 2025-12-02 17:34:05.165 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:05 compute-0 nova_compute[189459]: 2025-12-02 17:34:05.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:06 compute-0 podman[264124]: 2025-12-02 17:34:06.268308415 +0000 UTC m=+0.082443728 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, name=ubi9, vcs-type=git, config_id=edpm, io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543)
Dec  2 17:34:06 compute-0 podman[264123]: 2025-12-02 17:34:06.268343876 +0000 UTC m=+0.093080701 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:34:06 compute-0 podman[264125]: 2025-12-02 17:34:06.279250716 +0000 UTC m=+0.092398382 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:34:09 compute-0 nova_compute[189459]: 2025-12-02 17:34:09.115 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:09 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 17:34:09 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 17:34:10 compute-0 nova_compute[189459]: 2025-12-02 17:34:10.169 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:14 compute-0 nova_compute[189459]: 2025-12-02 17:34:14.116 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:15 compute-0 nova_compute[189459]: 2025-12-02 17:34:15.173 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:16 compute-0 podman[264182]: 2025-12-02 17:34:16.249756838 +0000 UTC m=+0.075757129 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:34:16 compute-0 podman[264183]: 2025-12-02 17:34:16.284310079 +0000 UTC m=+0.099246326 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:34:16 compute-0 podman[264181]: 2025-12-02 17:34:16.29862733 +0000 UTC m=+0.126911312 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:34:19 compute-0 nova_compute[189459]: 2025-12-02 17:34:19.120 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:20 compute-0 nova_compute[189459]: 2025-12-02 17:34:20.176 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:24 compute-0 nova_compute[189459]: 2025-12-02 17:34:24.122 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:25 compute-0 nova_compute[189459]: 2025-12-02 17:34:25.180 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:27 compute-0 podman[264256]: 2025-12-02 17:34:27.270677438 +0000 UTC m=+0.089205548 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck 
openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6)
Dec  2 17:34:29 compute-0 nova_compute[189459]: 2025-12-02 17:34:29.126 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:29 compute-0 podman[203941]: time="2025-12-02T17:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:34:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:34:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4331 "" "Go-http-client/1.1"
Dec  2 17:34:30 compute-0 nova_compute[189459]: 2025-12-02 17:34:30.183 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: ERROR   17:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: ERROR   17:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: ERROR   17:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: ERROR   17:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: ERROR   17:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:34:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:34:34 compute-0 nova_compute[189459]: 2025-12-02 17:34:34.129 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:34 compute-0 podman[264276]: 2025-12-02 17:34:34.271742336 +0000 UTC m=+0.094242852 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:34:34 compute-0 podman[264277]: 2025-12-02 17:34:34.279489212 +0000 UTC m=+0.093583024 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:34:35 compute-0 nova_compute[189459]: 2025-12-02 17:34:35.188 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:37 compute-0 podman[264316]: 2025-12-02 17:34:37.278146505 +0000 UTC m=+0.079915220 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  2 17:34:37 compute-0 podman[264314]: 2025-12-02 17:34:37.283295972 +0000 UTC m=+0.094224351 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  2 17:34:37 compute-0 podman[264315]: 2025-12-02 17:34:37.283554249 +0000 UTC m=+0.095794283 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:34:39 compute-0 nova_compute[189459]: 2025-12-02 17:34:39.132 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:39 compute-0 nova_compute[189459]: 2025-12-02 17:34:39.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:40 compute-0 nova_compute[189459]: 2025-12-02 17:34:40.193 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:44 compute-0 nova_compute[189459]: 2025-12-02 17:34:44.135 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:45 compute-0 nova_compute[189459]: 2025-12-02 17:34:45.197 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:47 compute-0 podman[264376]: 2025-12-02 17:34:47.269674137 +0000 UTC m=+0.087572954 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:34:47 compute-0 podman[264375]: 2025-12-02 17:34:47.296692517 +0000 UTC m=+0.104077134 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:34:47 compute-0 podman[264374]: 2025-12-02 17:34:47.31970223 +0000 UTC m=+0.130946400 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  2 17:34:49 compute-0 nova_compute[189459]: 2025-12-02 17:34:49.140 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:50 compute-0 nova_compute[189459]: 2025-12-02 17:34:50.201 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:50 compute-0 nova_compute[189459]: 2025-12-02 17:34:50.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:50 compute-0 nova_compute[189459]: 2025-12-02 17:34:50.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:34:50 compute-0 nova_compute[189459]: 2025-12-02 17:34:50.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:34:50 compute-0 nova_compute[189459]: 2025-12-02 17:34:50.427 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:34:52 compute-0 nova_compute[189459]: 2025-12-02 17:34:52.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:53 compute-0 nova_compute[189459]: 2025-12-02 17:34:53.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:54 compute-0 nova_compute[189459]: 2025-12-02 17:34:54.139 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.205 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.461 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.508 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.508 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.509 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.509 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.900 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.903 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5321MB free_disk=72.12223815917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.903 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:34:55 compute-0 nova_compute[189459]: 2025-12-02 17:34:55.904 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:34:56 compute-0 nova_compute[189459]: 2025-12-02 17:34:56.037 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:34:56 compute-0 nova_compute[189459]: 2025-12-02 17:34:56.038 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:34:56 compute-0 nova_compute[189459]: 2025-12-02 17:34:56.105 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:34:56 compute-0 nova_compute[189459]: 2025-12-02 17:34:56.135 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:34:56 compute-0 nova_compute[189459]: 2025-12-02 17:34:56.157 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:34:56 compute-0 nova_compute[189459]: 2025-12-02 17:34:56.158 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:34:58 compute-0 podman[264448]: 2025-12-02 17:34:58.286092684 +0000 UTC m=+0.113274569 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible)
Dec  2 17:34:59 compute-0 nova_compute[189459]: 2025-12-02 17:34:59.107 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:34:59 compute-0 nova_compute[189459]: 2025-12-02 17:34:59.142 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:34:59 compute-0 podman[203941]: time="2025-12-02T17:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:34:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:34:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4313 "" "Go-http-client/1.1"
Dec  2 17:35:00 compute-0 nova_compute[189459]: 2025-12-02 17:35:00.209 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:00 compute-0 nova_compute[189459]: 2025-12-02 17:35:00.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:01 compute-0 nova_compute[189459]: 2025-12-02 17:35:01.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:01 compute-0 nova_compute[189459]: 2025-12-02 17:35:01.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: ERROR   17:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: ERROR   17:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: ERROR   17:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: ERROR   17:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: ERROR   17:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:35:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:35:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:35:01.910 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:35:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:35:01.911 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:35:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:35:01.911 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.061 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.062 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:35:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:35:04 compute-0 nova_compute[189459]: 2025-12-02 17:35:04.144 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:05 compute-0 nova_compute[189459]: 2025-12-02 17:35:05.212 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:05 compute-0 podman[264469]: 2025-12-02 17:35:05.256122886 +0000 UTC m=+0.088023266 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 17:35:05 compute-0 podman[264470]: 2025-12-02 17:35:05.274253519 +0000 UTC m=+0.100159520 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:35:06 compute-0 nova_compute[189459]: 2025-12-02 17:35:06.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:08 compute-0 podman[264510]: 2025-12-02 17:35:08.241580387 +0000 UTC m=+0.065899506 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:35:08 compute-0 podman[264509]: 2025-12-02 17:35:08.246505169 +0000 UTC m=+0.069413061 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec  2 17:35:08 compute-0 podman[264508]: 2025-12-02 17:35:08.249410406 +0000 UTC m=+0.082982982 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:35:09 compute-0 nova_compute[189459]: 2025-12-02 17:35:09.146 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:10 compute-0 nova_compute[189459]: 2025-12-02 17:35:10.215 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:14 compute-0 nova_compute[189459]: 2025-12-02 17:35:14.147 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:15 compute-0 nova_compute[189459]: 2025-12-02 17:35:15.219 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:18 compute-0 podman[264566]: 2025-12-02 17:35:18.31119634 +0000 UTC m=+0.115255262 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:35:18 compute-0 podman[264565]: 2025-12-02 17:35:18.32696776 +0000 UTC m=+0.134764742 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:35:18 compute-0 podman[264564]: 2025-12-02 17:35:18.352763837 +0000 UTC m=+0.162288105 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  2 17:35:19 compute-0 nova_compute[189459]: 2025-12-02 17:35:19.150 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:20 compute-0 nova_compute[189459]: 2025-12-02 17:35:20.224 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:24 compute-0 nova_compute[189459]: 2025-12-02 17:35:24.152 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:25 compute-0 nova_compute[189459]: 2025-12-02 17:35:25.228 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:29 compute-0 nova_compute[189459]: 2025-12-02 17:35:29.154 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:29 compute-0 podman[264635]: 2025-12-02 17:35:29.286465054 +0000 UTC m=+0.107307170 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm)
Dec  2 17:35:29 compute-0 podman[203941]: time="2025-12-02T17:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:35:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:35:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4315 "" "Go-http-client/1.1"
Dec  2 17:35:30 compute-0 nova_compute[189459]: 2025-12-02 17:35:30.233 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: ERROR   17:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: ERROR   17:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: ERROR   17:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: ERROR   17:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: ERROR   17:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:35:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:35:34 compute-0 nova_compute[189459]: 2025-12-02 17:35:34.157 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:35 compute-0 nova_compute[189459]: 2025-12-02 17:35:35.236 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:36 compute-0 podman[264656]: 2025-12-02 17:35:36.28439447 +0000 UTC m=+0.107793283 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 17:35:36 compute-0 podman[264657]: 2025-12-02 17:35:36.305728508 +0000 UTC m=+0.115111818 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:35:39 compute-0 nova_compute[189459]: 2025-12-02 17:35:39.162 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:39 compute-0 podman[264697]: 2025-12-02 17:35:39.255627902 +0000 UTC m=+0.071604299 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Dec  2 17:35:39 compute-0 podman[264696]: 2025-12-02 17:35:39.288926109 +0000 UTC m=+0.098473914 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, name=ubi9)
Dec  2 17:35:39 compute-0 podman[264695]: 2025-12-02 17:35:39.291675823 +0000 UTC m=+0.107533176 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:35:39 compute-0 nova_compute[189459]: 2025-12-02 17:35:39.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:40 compute-0 nova_compute[189459]: 2025-12-02 17:35:40.240 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:44 compute-0 nova_compute[189459]: 2025-12-02 17:35:44.164 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:45 compute-0 nova_compute[189459]: 2025-12-02 17:35:45.243 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:49 compute-0 nova_compute[189459]: 2025-12-02 17:35:49.168 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:49 compute-0 podman[264755]: 2025-12-02 17:35:49.235329589 +0000 UTC m=+0.061686204 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:35:49 compute-0 podman[264756]: 2025-12-02 17:35:49.268047771 +0000 UTC m=+0.089071644 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:35:49 compute-0 podman[264754]: 2025-12-02 17:35:49.271438121 +0000 UTC m=+0.101011462 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:35:50 compute-0 nova_compute[189459]: 2025-12-02 17:35:50.246 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:51 compute-0 nova_compute[189459]: 2025-12-02 17:35:51.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:51 compute-0 nova_compute[189459]: 2025-12-02 17:35:51.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:35:51 compute-0 nova_compute[189459]: 2025-12-02 17:35:51.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:35:51 compute-0 nova_compute[189459]: 2025-12-02 17:35:51.432 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:35:53 compute-0 nova_compute[189459]: 2025-12-02 17:35:53.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:54 compute-0 nova_compute[189459]: 2025-12-02 17:35:54.171 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:54 compute-0 nova_compute[189459]: 2025-12-02 17:35:54.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:55 compute-0 nova_compute[189459]: 2025-12-02 17:35:55.249 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.445 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.445 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.813 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.814 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5321MB free_disk=72.12223815917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.814 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:35:57 compute-0 nova_compute[189459]: 2025-12-02 17:35:57.815 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.028 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.029 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.123 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.254 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.255 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.268 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.298 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.324 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.337 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.339 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:35:58 compute-0 nova_compute[189459]: 2025-12-02 17:35:58.340 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:35:59 compute-0 nova_compute[189459]: 2025-12-02 17:35:59.175 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:35:59 compute-0 nova_compute[189459]: 2025-12-02 17:35:59.340 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:59 compute-0 nova_compute[189459]: 2025-12-02 17:35:59.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:35:59 compute-0 podman[203941]: time="2025-12-02T17:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:35:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:35:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4321 "" "Go-http-client/1.1"
Dec  2 17:36:00 compute-0 nova_compute[189459]: 2025-12-02 17:36:00.252 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:00 compute-0 podman[264829]: 2025-12-02 17:36:00.255472928 +0000 UTC m=+0.076727125 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: ERROR   17:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: ERROR   17:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: ERROR   17:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: ERROR   17:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:36:01 compute-0 nova_compute[189459]: 2025-12-02 17:36:01.430 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: ERROR   17:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:36:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:36:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:36:01.912 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:36:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:36:01.912 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:36:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:36:01.913 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:36:03 compute-0 nova_compute[189459]: 2025-12-02 17:36:03.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:03 compute-0 nova_compute[189459]: 2025-12-02 17:36:03.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:36:04 compute-0 nova_compute[189459]: 2025-12-02 17:36:04.179 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:05 compute-0 nova_compute[189459]: 2025-12-02 17:36:05.255 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:06 compute-0 nova_compute[189459]: 2025-12-02 17:36:06.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:06 compute-0 nova_compute[189459]: 2025-12-02 17:36:06.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:36:06 compute-0 nova_compute[189459]: 2025-12-02 17:36:06.427 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:36:07 compute-0 podman[264850]: 2025-12-02 17:36:07.307110834 +0000 UTC m=+0.122082313 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, 
io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:36:07 compute-0 podman[264851]: 2025-12-02 17:36:07.344264414 +0000 UTC m=+0.149575716 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:36:07 compute-0 nova_compute[189459]: 2025-12-02 17:36:07.427 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:09 compute-0 nova_compute[189459]: 2025-12-02 17:36:09.184 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:10 compute-0 nova_compute[189459]: 2025-12-02 17:36:10.257 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:10 compute-0 podman[264889]: 2025-12-02 17:36:10.297960228 +0000 UTC m=+0.125886184 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 17:36:10 compute-0 podman[264890]: 2025-12-02 17:36:10.298696448 +0000 UTC m=+0.108260525 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, architecture=x86_64, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:36:10 compute-0 podman[264896]: 2025-12-02 17:36:10.297717771 +0000 UTC m=+0.109485817 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:36:14 compute-0 nova_compute[189459]: 2025-12-02 17:36:14.186 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:15 compute-0 nova_compute[189459]: 2025-12-02 17:36:15.261 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:19 compute-0 nova_compute[189459]: 2025-12-02 17:36:19.189 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:20 compute-0 nova_compute[189459]: 2025-12-02 17:36:20.265 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:20 compute-0 podman[264950]: 2025-12-02 17:36:20.278154831 +0000 UTC m=+0.094253962 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:36:20 compute-0 podman[264951]: 2025-12-02 17:36:20.307287607 +0000 UTC m=+0.127891088 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:36:20 compute-0 podman[264949]: 2025-12-02 17:36:20.337315407 +0000 UTC m=+0.167395080 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 17:36:23 compute-0 nova_compute[189459]: 2025-12-02 17:36:23.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:23 compute-0 nova_compute[189459]: 2025-12-02 17:36:23.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:36:24 compute-0 nova_compute[189459]: 2025-12-02 17:36:24.194 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:25 compute-0 nova_compute[189459]: 2025-12-02 17:36:25.269 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:29 compute-0 nova_compute[189459]: 2025-12-02 17:36:29.194 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:29 compute-0 podman[203941]: time="2025-12-02T17:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:36:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:36:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Dec  2 17:36:30 compute-0 nova_compute[189459]: 2025-12-02 17:36:30.274 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:31 compute-0 podman[265020]: 2025-12-02 17:36:31.254854761 +0000 UTC m=+0.082912030 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: ERROR   17:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: ERROR   17:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: ERROR   17:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: ERROR   17:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: ERROR   17:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:36:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:36:34 compute-0 nova_compute[189459]: 2025-12-02 17:36:34.197 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:35 compute-0 nova_compute[189459]: 2025-12-02 17:36:35.277 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:38 compute-0 podman[265041]: 2025-12-02 17:36:38.260942383 +0000 UTC m=+0.095217108 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:36:38 compute-0 podman[265042]: 2025-12-02 17:36:38.260933943 +0000 UTC m=+0.091008076 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 17:36:39 compute-0 nova_compute[189459]: 2025-12-02 17:36:39.200 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:39 compute-0 nova_compute[189459]: 2025-12-02 17:36:39.429 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:40 compute-0 nova_compute[189459]: 2025-12-02 17:36:40.282 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:41 compute-0 podman[265080]: 2025-12-02 17:36:41.310597945 +0000 UTC m=+0.129818740 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, release=1214.1726694543, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container)
Dec  2 17:36:41 compute-0 podman[265079]: 2025-12-02 17:36:41.315109035 +0000 UTC m=+0.130627711 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:36:41 compute-0 podman[265081]: 2025-12-02 17:36:41.324206788 +0000 UTC m=+0.134858875 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 17:36:44 compute-0 nova_compute[189459]: 2025-12-02 17:36:44.202 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:45 compute-0 nova_compute[189459]: 2025-12-02 17:36:45.285 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:49 compute-0 nova_compute[189459]: 2025-12-02 17:36:49.206 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:50 compute-0 nova_compute[189459]: 2025-12-02 17:36:50.288 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:51 compute-0 podman[265135]: 2025-12-02 17:36:51.289974135 +0000 UTC m=+0.102566254 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:36:51 compute-0 podman[265136]: 2025-12-02 17:36:51.313137732 +0000 UTC m=+0.114950103 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:36:51 compute-0 podman[265134]: 2025-12-02 17:36:51.340562643 +0000 UTC m=+0.163468296 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  2 17:36:53 compute-0 nova_compute[189459]: 2025-12-02 17:36:53.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:53 compute-0 nova_compute[189459]: 2025-12-02 17:36:53.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:36:53 compute-0 nova_compute[189459]: 2025-12-02 17:36:53.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:36:53 compute-0 nova_compute[189459]: 2025-12-02 17:36:53.429 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:36:54 compute-0 nova_compute[189459]: 2025-12-02 17:36:54.210 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:54 compute-0 nova_compute[189459]: 2025-12-02 17:36:54.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:55 compute-0 nova_compute[189459]: 2025-12-02 17:36:55.292 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:55 compute-0 nova_compute[189459]: 2025-12-02 17:36:55.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.445 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.824 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.825 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5328MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.825 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.826 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.930 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.931 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.960 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.978 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.981 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:36:57 compute-0 nova_compute[189459]: 2025-12-02 17:36:57.981 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:36:59 compute-0 nova_compute[189459]: 2025-12-02 17:36:59.214 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:36:59 compute-0 podman[203941]: time="2025-12-02T17:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:36:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:36:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4327 "" "Go-http-client/1.1"
Dec  2 17:36:59 compute-0 nova_compute[189459]: 2025-12-02 17:36:59.977 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:00 compute-0 nova_compute[189459]: 2025-12-02 17:37:00.060 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:00 compute-0 nova_compute[189459]: 2025-12-02 17:37:00.294 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: ERROR   17:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: ERROR   17:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: ERROR   17:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: ERROR   17:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: ERROR   17:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:37:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:37:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:37:01.914 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:37:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:37:01.914 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:37:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:37:01.915 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:37:02 compute-0 podman[265207]: 2025-12-02 17:37:02.309698554 +0000 UTC m=+0.138199703 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 17:37:02 compute-0 nova_compute[189459]: 2025-12-02 17:37:02.489 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.062 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.063 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:37:03.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:37:03 compute-0 nova_compute[189459]: 2025-12-02 17:37:03.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:03 compute-0 nova_compute[189459]: 2025-12-02 17:37:03.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:37:04 compute-0 nova_compute[189459]: 2025-12-02 17:37:04.218 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:05 compute-0 nova_compute[189459]: 2025-12-02 17:37:05.298 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:09 compute-0 nova_compute[189459]: 2025-12-02 17:37:09.220 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:09 compute-0 podman[265229]: 2025-12-02 17:37:09.252194562 +0000 UTC m=+0.078217145 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:37:09 compute-0 podman[265230]: 2025-12-02 17:37:09.292805274 +0000 UTC m=+0.112971241 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:37:09 compute-0 nova_compute[189459]: 2025-12-02 17:37:09.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:10 compute-0 nova_compute[189459]: 2025-12-02 17:37:10.303 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:12 compute-0 podman[265269]: 2025-12-02 17:37:12.254587814 +0000 UTC m=+0.077521546 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:37:12 compute-0 podman[265267]: 2025-12-02 17:37:12.297884128 +0000 UTC m=+0.116780622 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible)
Dec  2 17:37:12 compute-0 podman[265268]: 2025-12-02 17:37:12.297916469 +0000 UTC m=+0.112134329 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:37:14 compute-0 nova_compute[189459]: 2025-12-02 17:37:14.223 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:15 compute-0 nova_compute[189459]: 2025-12-02 17:37:15.306 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:19 compute-0 nova_compute[189459]: 2025-12-02 17:37:19.225 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:20 compute-0 nova_compute[189459]: 2025-12-02 17:37:20.310 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:22 compute-0 podman[265325]: 2025-12-02 17:37:22.232395842 +0000 UTC m=+0.064716516 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:37:22 compute-0 podman[265326]: 2025-12-02 17:37:22.264680932 +0000 UTC m=+0.093644606 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:37:22 compute-0 podman[265324]: 2025-12-02 17:37:22.281098199 +0000 UTC m=+0.114836520 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 17:37:24 compute-0 nova_compute[189459]: 2025-12-02 17:37:24.228 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:25 compute-0 nova_compute[189459]: 2025-12-02 17:37:25.313 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:29 compute-0 nova_compute[189459]: 2025-12-02 17:37:29.232 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:29 compute-0 podman[203941]: time="2025-12-02T17:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:37:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:37:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4319 "" "Go-http-client/1.1"
Dec  2 17:37:30 compute-0 nova_compute[189459]: 2025-12-02 17:37:30.317 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: ERROR   17:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: ERROR   17:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: ERROR   17:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: ERROR   17:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: ERROR   17:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:37:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:37:33 compute-0 podman[265398]: 2025-12-02 17:37:33.25591269 +0000 UTC m=+0.088105485 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:37:34 compute-0 nova_compute[189459]: 2025-12-02 17:37:34.233 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:35 compute-0 nova_compute[189459]: 2025-12-02 17:37:35.322 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:39 compute-0 nova_compute[189459]: 2025-12-02 17:37:39.236 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:40 compute-0 podman[265420]: 2025-12-02 17:37:40.302467014 +0000 UTC m=+0.111397002 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:37:40 compute-0 nova_compute[189459]: 2025-12-02 17:37:40.326 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:40 compute-0 podman[265419]: 2025-12-02 17:37:40.335776056 +0000 UTC m=+0.149279425 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 17:37:40 compute-0 nova_compute[189459]: 2025-12-02 17:37:40.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:43 compute-0 podman[265458]: 2025-12-02 17:37:43.266159224 +0000 UTC m=+0.088458913 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:37:43 compute-0 podman[265457]: 2025-12-02 17:37:43.266617097 +0000 UTC m=+0.084061778 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, version=9.4)
Dec  2 17:37:43 compute-0 podman[265456]: 2025-12-02 17:37:43.280758561 +0000 UTC m=+0.110233170 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  2 17:37:44 compute-0 nova_compute[189459]: 2025-12-02 17:37:44.238 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:45 compute-0 nova_compute[189459]: 2025-12-02 17:37:45.331 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:49 compute-0 nova_compute[189459]: 2025-12-02 17:37:49.244 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:50 compute-0 nova_compute[189459]: 2025-12-02 17:37:50.335 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:53 compute-0 podman[265510]: 2025-12-02 17:37:53.286867775 +0000 UTC m=+0.109595272 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:37:53 compute-0 podman[265511]: 2025-12-02 17:37:53.298071012 +0000 UTC m=+0.115763076 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 17:37:53 compute-0 podman[265509]: 2025-12-02 17:37:53.31384159 +0000 UTC m=+0.139819773 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 17:37:53 compute-0 nova_compute[189459]: 2025-12-02 17:37:53.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:53 compute-0 nova_compute[189459]: 2025-12-02 17:37:53.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:37:53 compute-0 nova_compute[189459]: 2025-12-02 17:37:53.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:37:53 compute-0 nova_compute[189459]: 2025-12-02 17:37:53.437 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:37:54 compute-0 nova_compute[189459]: 2025-12-02 17:37:54.257 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:54 compute-0 nova_compute[189459]: 2025-12-02 17:37:54.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:55 compute-0 nova_compute[189459]: 2025-12-02 17:37:55.338 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:55 compute-0 nova_compute[189459]: 2025-12-02 17:37:55.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.452 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.452 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.452 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.453 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.929 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.930 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5327MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.931 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:37:57 compute-0 nova_compute[189459]: 2025-12-02 17:37:57.931 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:37:59 compute-0 nova_compute[189459]: 2025-12-02 17:37:59.251 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:37:59 compute-0 podman[203941]: time="2025-12-02T17:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:37:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:37:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4322 "" "Go-http-client/1.1"
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.341 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.715 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.715 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.842 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.867 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.869 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:38:00 compute-0 nova_compute[189459]: 2025-12-02 17:38:00.869 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: ERROR   17:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: ERROR   17:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: ERROR   17:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: ERROR   17:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: ERROR   17:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:38:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:38:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:38:01.915 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:38:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:38:01.916 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:38:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:38:01.916 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:38:03 compute-0 nova_compute[189459]: 2025-12-02 17:38:03.871 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:04 compute-0 nova_compute[189459]: 2025-12-02 17:38:04.254 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:04 compute-0 podman[265578]: 2025-12-02 17:38:04.268101378 +0000 UTC m=+0.094130024 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-type=git)
Dec  2 17:38:04 compute-0 nova_compute[189459]: 2025-12-02 17:38:04.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:04 compute-0 nova_compute[189459]: 2025-12-02 17:38:04.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:04 compute-0 nova_compute[189459]: 2025-12-02 17:38:04.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:38:05 compute-0 nova_compute[189459]: 2025-12-02 17:38:05.345 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:09 compute-0 nova_compute[189459]: 2025-12-02 17:38:09.258 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:10 compute-0 nova_compute[189459]: 2025-12-02 17:38:10.347 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:11 compute-0 podman[265599]: 2025-12-02 17:38:11.26209892 +0000 UTC m=+0.080584025 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:38:11 compute-0 podman[265600]: 2025-12-02 17:38:11.277066237 +0000 UTC m=+0.085500806 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 17:38:11 compute-0 nova_compute[189459]: 2025-12-02 17:38:11.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:14 compute-0 podman[265637]: 2025-12-02 17:38:14.260598832 +0000 UTC m=+0.084800697 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:38:14 compute-0 podman[265639]: 2025-12-02 17:38:14.26316397 +0000 UTC m=+0.078683335 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 17:38:14 compute-0 nova_compute[189459]: 2025-12-02 17:38:14.266 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:14 compute-0 podman[265638]: 2025-12-02 17:38:14.275335073 +0000 UTC m=+0.092435800 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.expose-services=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9)
Dec  2 17:38:15 compute-0 nova_compute[189459]: 2025-12-02 17:38:15.352 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:16 compute-0 nova_compute[189459]: 2025-12-02 17:38:16.097 189463 DEBUG oslo_concurrency.processutils [None req-d1cc1b97-5008-48d0-b431-0b5e09ca062a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 17:38:16 compute-0 nova_compute[189459]: 2025-12-02 17:38:16.145 189463 DEBUG oslo_concurrency.processutils [None req-d1cc1b97-5008-48d0-b431-0b5e09ca062a 91c12bcb1ad14b95b1bdedf7527f1adf 2f96d47197fa40f2a7126bf626847d74 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 17:38:19 compute-0 nova_compute[189459]: 2025-12-02 17:38:19.334 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:20 compute-0 nova_compute[189459]: 2025-12-02 17:38:20.355 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:23 compute-0 nova_compute[189459]: 2025-12-02 17:38:23.936 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:38:23.935 106835 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:6d:9c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '36:d9:3c:1f:19:7c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 17:38:23 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:38:23.938 106835 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 17:38:24 compute-0 podman[265693]: 2025-12-02 17:38:24.291436902 +0000 UTC m=+0.103152793 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:38:24 compute-0 podman[265694]: 2025-12-02 17:38:24.301582951 +0000 UTC m=+0.106487122 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:38:24 compute-0 nova_compute[189459]: 2025-12-02 17:38:24.336 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:24 compute-0 podman[265692]: 2025-12-02 17:38:24.341051126 +0000 UTC m=+0.160981375 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec  2 17:38:25 compute-0 nova_compute[189459]: 2025-12-02 17:38:25.359 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:29 compute-0 nova_compute[189459]: 2025-12-02 17:38:29.339 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:29 compute-0 podman[203941]: time="2025-12-02T17:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:38:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:38:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4323 "" "Go-http-client/1.1"
Dec  2 17:38:30 compute-0 nova_compute[189459]: 2025-12-02 17:38:30.364 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: ERROR   17:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: ERROR   17:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: ERROR   17:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: ERROR   17:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: ERROR   17:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:38:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:38:33 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:38:33.942 106835 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=000c10a1-5e88-4874-8132-a124d4da5271, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 17:38:34 compute-0 nova_compute[189459]: 2025-12-02 17:38:34.342 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:35 compute-0 podman[265759]: 2025-12-02 17:38:35.308576628 +0000 UTC m=+0.125912606 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Dec  2 17:38:35 compute-0 nova_compute[189459]: 2025-12-02 17:38:35.369 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:39 compute-0 nova_compute[189459]: 2025-12-02 17:38:39.345 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:40 compute-0 nova_compute[189459]: 2025-12-02 17:38:40.373 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:41 compute-0 nova_compute[189459]: 2025-12-02 17:38:41.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:42 compute-0 podman[265778]: 2025-12-02 17:38:42.312551444 +0000 UTC m=+0.135849689 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4)
Dec  2 17:38:42 compute-0 podman[265779]: 2025-12-02 17:38:42.332169014 +0000 UTC m=+0.150158769 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:38:44 compute-0 nova_compute[189459]: 2025-12-02 17:38:44.348 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:44 compute-0 podman[265819]: 2025-12-02 17:38:44.814851403 +0000 UTC m=+0.098062389 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:38:44 compute-0 podman[265817]: 2025-12-02 17:38:44.821104398 +0000 UTC m=+0.126491731 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 17:38:44 compute-0 podman[265818]: 2025-12-02 17:38:44.855113839 +0000 UTC m=+0.152253024 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9)
Dec  2 17:38:45 compute-0 nova_compute[189459]: 2025-12-02 17:38:45.378 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:49 compute-0 nova_compute[189459]: 2025-12-02 17:38:49.352 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:50 compute-0 nova_compute[189459]: 2025-12-02 17:38:50.382 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:53 compute-0 nova_compute[189459]: 2025-12-02 17:38:53.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:53 compute-0 nova_compute[189459]: 2025-12-02 17:38:53.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:38:53 compute-0 nova_compute[189459]: 2025-12-02 17:38:53.413 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:38:53 compute-0 nova_compute[189459]: 2025-12-02 17:38:53.451 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:38:54 compute-0 nova_compute[189459]: 2025-12-02 17:38:54.355 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:55 compute-0 podman[265874]: 2025-12-02 17:38:55.287594057 +0000 UTC m=+0.092447229 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:38:55 compute-0 podman[265875]: 2025-12-02 17:38:55.307773752 +0000 UTC m=+0.105616329 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:38:55 compute-0 podman[265873]: 2025-12-02 17:38:55.35564077 +0000 UTC m=+0.169746997 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:38:55 compute-0 nova_compute[189459]: 2025-12-02 17:38:55.384 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:56 compute-0 nova_compute[189459]: 2025-12-02 17:38:56.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:56 compute-0 nova_compute[189459]: 2025-12-02 17:38:56.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.443 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.444 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.444 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.950 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.952 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5332MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.952 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:38:57 compute-0 nova_compute[189459]: 2025-12-02 17:38:57.953 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:38:58 compute-0 nova_compute[189459]: 2025-12-02 17:38:58.131 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:38:58 compute-0 nova_compute[189459]: 2025-12-02 17:38:58.131 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:38:58 compute-0 nova_compute[189459]: 2025-12-02 17:38:58.247 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:38:58 compute-0 nova_compute[189459]: 2025-12-02 17:38:58.261 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:38:58 compute-0 nova_compute[189459]: 2025-12-02 17:38:58.262 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:38:58 compute-0 nova_compute[189459]: 2025-12-02 17:38:58.262 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:38:59 compute-0 nova_compute[189459]: 2025-12-02 17:38:59.358 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:38:59 compute-0 podman[203941]: time="2025-12-02T17:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:38:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:38:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4323 "" "Go-http-client/1.1"
Dec  2 17:39:00 compute-0 nova_compute[189459]: 2025-12-02 17:39:00.388 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:01 compute-0 nova_compute[189459]: 2025-12-02 17:39:01.264 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: ERROR   17:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: ERROR   17:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: ERROR   17:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: ERROR   17:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: ERROR   17:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:39:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:39:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:39:01.916 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:39:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:39:01.917 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:39:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:39:01.917 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:39:02 compute-0 nova_compute[189459]: 2025-12-02 17:39:02.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.063 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.064 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e888fe0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:39:03.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:39:04 compute-0 nova_compute[189459]: 2025-12-02 17:39:04.361 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:05 compute-0 nova_compute[189459]: 2025-12-02 17:39:05.392 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:05 compute-0 nova_compute[189459]: 2025-12-02 17:39:05.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:05 compute-0 nova_compute[189459]: 2025-12-02 17:39:05.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:39:06 compute-0 podman[265945]: 2025-12-02 17:39:06.333481333 +0000 UTC m=+0.152215642 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Dec  2 17:39:06 compute-0 nova_compute[189459]: 2025-12-02 17:39:06.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:09 compute-0 nova_compute[189459]: 2025-12-02 17:39:09.364 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:10 compute-0 nova_compute[189459]: 2025-12-02 17:39:10.397 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:12 compute-0 nova_compute[189459]: 2025-12-02 17:39:12.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:13 compute-0 podman[265967]: 2025-12-02 17:39:13.304730073 +0000 UTC m=+0.127492178 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:39:13 compute-0 podman[265968]: 2025-12-02 17:39:13.313974888 +0000 UTC m=+0.118407557 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  2 17:39:14 compute-0 nova_compute[189459]: 2025-12-02 17:39:14.366 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:15 compute-0 podman[266007]: 2025-12-02 17:39:15.297065216 +0000 UTC m=+0.101994213 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:39:15 compute-0 podman[266006]: 2025-12-02 17:39:15.302086839 +0000 UTC m=+0.107060637 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:39:15 compute-0 podman[266005]: 2025-12-02 17:39:15.309475004 +0000 UTC m=+0.121781396 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 17:39:15 compute-0 nova_compute[189459]: 2025-12-02 17:39:15.402 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:19 compute-0 nova_compute[189459]: 2025-12-02 17:39:19.369 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:20 compute-0 nova_compute[189459]: 2025-12-02 17:39:20.406 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:24 compute-0 nova_compute[189459]: 2025-12-02 17:39:24.374 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:25 compute-0 nova_compute[189459]: 2025-12-02 17:39:25.410 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:26 compute-0 podman[266062]: 2025-12-02 17:39:26.288445318 +0000 UTC m=+0.099024944 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:39:26 compute-0 podman[266063]: 2025-12-02 17:39:26.329711231 +0000 UTC m=+0.131853784 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:39:26 compute-0 podman[266061]: 2025-12-02 17:39:26.363333881 +0000 UTC m=+0.179451274 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec  2 17:39:29 compute-0 nova_compute[189459]: 2025-12-02 17:39:29.377 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:29 compute-0 podman[203941]: time="2025-12-02T17:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:39:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:39:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Dec  2 17:39:30 compute-0 nova_compute[189459]: 2025-12-02 17:39:30.414 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: ERROR   17:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: ERROR   17:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: ERROR   17:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: ERROR   17:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: ERROR   17:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:39:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:39:34 compute-0 nova_compute[189459]: 2025-12-02 17:39:34.381 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:35 compute-0 nova_compute[189459]: 2025-12-02 17:39:35.419 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:37 compute-0 podman[266130]: 2025-12-02 17:39:37.309471804 +0000 UTC m=+0.131979237 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec  2 17:39:39 compute-0 nova_compute[189459]: 2025-12-02 17:39:39.384 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:40 compute-0 nova_compute[189459]: 2025-12-02 17:39:40.423 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:42 compute-0 nova_compute[189459]: 2025-12-02 17:39:42.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:44 compute-0 podman[266150]: 2025-12-02 17:39:44.311172179 +0000 UTC m=+0.120422571 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 17:39:44 compute-0 podman[266151]: 2025-12-02 17:39:44.332066273 +0000 UTC m=+0.136219200 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:39:44 compute-0 nova_compute[189459]: 2025-12-02 17:39:44.387 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:45 compute-0 nova_compute[189459]: 2025-12-02 17:39:45.427 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:46 compute-0 podman[266189]: 2025-12-02 17:39:46.300447329 +0000 UTC m=+0.117673418 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec  2 17:39:46 compute-0 podman[266188]: 2025-12-02 17:39:46.307897327 +0000 UTC m=+0.122415084 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  2 17:39:46 compute-0 podman[266190]: 2025-12-02 17:39:46.317883761 +0000 UTC m=+0.120941974 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  2 17:39:49 compute-0 nova_compute[189459]: 2025-12-02 17:39:49.389 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:50 compute-0 nova_compute[189459]: 2025-12-02 17:39:50.430 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:54 compute-0 nova_compute[189459]: 2025-12-02 17:39:54.393 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:55 compute-0 nova_compute[189459]: 2025-12-02 17:39:55.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:55 compute-0 nova_compute[189459]: 2025-12-02 17:39:55.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:39:55 compute-0 nova_compute[189459]: 2025-12-02 17:39:55.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:39:55 compute-0 nova_compute[189459]: 2025-12-02 17:39:55.430 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:39:55 compute-0 nova_compute[189459]: 2025-12-02 17:39:55.433 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:56 compute-0 nova_compute[189459]: 2025-12-02 17:39:56.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:57 compute-0 podman[266243]: 2025-12-02 17:39:57.262492385 +0000 UTC m=+0.074969407 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 17:39:57 compute-0 podman[266244]: 2025-12-02 17:39:57.276718482 +0000 UTC m=+0.086892183 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:39:57 compute-0 podman[266242]: 2025-12-02 17:39:57.346092809 +0000 UTC m=+0.160469361 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 17:39:57 compute-0 nova_compute[189459]: 2025-12-02 17:39:57.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.454 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.455 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.455 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.455 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.858 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.859 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5321MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.860 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.860 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.943 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.944 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.976 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.990 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.993 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:39:58 compute-0 nova_compute[189459]: 2025-12-02 17:39:58.994 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:39:59 compute-0 nova_compute[189459]: 2025-12-02 17:39:59.396 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:39:59 compute-0 podman[203941]: time="2025-12-02T17:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:39:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:39:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4324 "" "Go-http-client/1.1"
Dec  2 17:39:59 compute-0 nova_compute[189459]: 2025-12-02 17:39:59.996 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:00 compute-0 nova_compute[189459]: 2025-12-02 17:40:00.437 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: ERROR   17:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: ERROR   17:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: ERROR   17:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: ERROR   17:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: ERROR   17:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:40:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:40:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:40:01.918 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:40:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:40:01.918 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:40:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:40:01.919 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:40:02 compute-0 nova_compute[189459]: 2025-12-02 17:40:02.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:04 compute-0 nova_compute[189459]: 2025-12-02 17:40:04.399 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:05 compute-0 nova_compute[189459]: 2025-12-02 17:40:05.441 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:06 compute-0 nova_compute[189459]: 2025-12-02 17:40:06.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:06 compute-0 nova_compute[189459]: 2025-12-02 17:40:06.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:40:07 compute-0 nova_compute[189459]: 2025-12-02 17:40:07.407 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:08 compute-0 podman[266313]: 2025-12-02 17:40:08.297097431 +0000 UTC m=+0.112480960 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, managed_by=edpm_ansible)
Dec  2 17:40:09 compute-0 nova_compute[189459]: 2025-12-02 17:40:09.403 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:10 compute-0 nova_compute[189459]: 2025-12-02 17:40:10.445 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:14 compute-0 nova_compute[189459]: 2025-12-02 17:40:14.406 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:14 compute-0 nova_compute[189459]: 2025-12-02 17:40:14.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:14 compute-0 podman[266335]: 2025-12-02 17:40:14.850127802 +0000 UTC m=+0.134487033 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 17:40:14 compute-0 podman[266334]: 2025-12-02 17:40:14.851117508 +0000 UTC m=+0.142472075 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 17:40:15 compute-0 nova_compute[189459]: 2025-12-02 17:40:15.450 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:17 compute-0 podman[266374]: 2025-12-02 17:40:17.294305861 +0000 UTC m=+0.110364993 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, build-date=2024-09-18T21:23:30, version=9.4, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  2 17:40:17 compute-0 podman[266373]: 2025-12-02 17:40:17.303072803 +0000 UTC m=+0.133622399 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 17:40:17 compute-0 podman[266375]: 2025-12-02 17:40:17.308005094 +0000 UTC m=+0.127555009 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:40:19 compute-0 nova_compute[189459]: 2025-12-02 17:40:19.408 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:20 compute-0 nova_compute[189459]: 2025-12-02 17:40:20.455 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:24 compute-0 nova_compute[189459]: 2025-12-02 17:40:24.411 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:25 compute-0 nova_compute[189459]: 2025-12-02 17:40:25.459 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:28 compute-0 podman[266432]: 2025-12-02 17:40:28.28212926 +0000 UTC m=+0.089441711 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:40:28 compute-0 podman[266431]: 2025-12-02 17:40:28.302502339 +0000 UTC m=+0.117635507 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:40:28 compute-0 podman[266430]: 2025-12-02 17:40:28.321323678 +0000 UTC m=+0.140683528 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:40:29 compute-0 nova_compute[189459]: 2025-12-02 17:40:29.415 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:29 compute-0 podman[203941]: time="2025-12-02T17:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:40:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:40:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4324 "" "Go-http-client/1.1"
Dec  2 17:40:30 compute-0 nova_compute[189459]: 2025-12-02 17:40:30.463 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: ERROR   17:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: ERROR   17:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: ERROR   17:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: ERROR   17:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: ERROR   17:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:40:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:40:34 compute-0 nova_compute[189459]: 2025-12-02 17:40:34.416 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:35 compute-0 nova_compute[189459]: 2025-12-02 17:40:35.469 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:39 compute-0 podman[266499]: 2025-12-02 17:40:39.301677828 +0000 UTC m=+0.125360871 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  2 17:40:39 compute-0 nova_compute[189459]: 2025-12-02 17:40:39.420 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:40 compute-0 nova_compute[189459]: 2025-12-02 17:40:40.473 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:42 compute-0 nova_compute[189459]: 2025-12-02 17:40:42.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:44 compute-0 nova_compute[189459]: 2025-12-02 17:40:44.425 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:45 compute-0 podman[266518]: 2025-12-02 17:40:45.284254241 +0000 UTC m=+0.109954823 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  2 17:40:45 compute-0 podman[266519]: 2025-12-02 17:40:45.296526586 +0000 UTC m=+0.112996354 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  2 17:40:45 compute-0 nova_compute[189459]: 2025-12-02 17:40:45.478 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:48 compute-0 podman[266555]: 2025-12-02 17:40:48.303078231 +0000 UTC m=+0.106738798 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 17:40:48 compute-0 podman[266554]: 2025-12-02 17:40:48.334547124 +0000 UTC m=+0.148623477 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:40:48 compute-0 podman[266556]: 2025-12-02 17:40:48.335003626 +0000 UTC m=+0.130427775 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:40:49 compute-0 nova_compute[189459]: 2025-12-02 17:40:49.427 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:50 compute-0 nova_compute[189459]: 2025-12-02 17:40:50.481 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:54 compute-0 nova_compute[189459]: 2025-12-02 17:40:54.430 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:55 compute-0 nova_compute[189459]: 2025-12-02 17:40:55.485 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:56 compute-0 nova_compute[189459]: 2025-12-02 17:40:56.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:56 compute-0 nova_compute[189459]: 2025-12-02 17:40:56.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:40:56 compute-0 nova_compute[189459]: 2025-12-02 17:40:56.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:40:56 compute-0 nova_compute[189459]: 2025-12-02 17:40:56.438 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:40:56 compute-0 nova_compute[189459]: 2025-12-02 17:40:56.752 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:57 compute-0 nova_compute[189459]: 2025-12-02 17:40:57.415 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:58 compute-0 nova_compute[189459]: 2025-12-02 17:40:58.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:40:59 compute-0 podman[266608]: 2025-12-02 17:40:59.316174427 +0000 UTC m=+0.122070844 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:40:59 compute-0 podman[266607]: 2025-12-02 17:40:59.322191707 +0000 UTC m=+0.127391366 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:40:59 compute-0 podman[266606]: 2025-12-02 17:40:59.362514115 +0000 UTC m=+0.177237796 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 17:40:59 compute-0 nova_compute[189459]: 2025-12-02 17:40:59.432 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:40:59 compute-0 podman[203941]: time="2025-12-02T17:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:40:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:40:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.453 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.454 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.455 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.455 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.489 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.994 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.995 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5333MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.996 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:41:00 compute-0 nova_compute[189459]: 2025-12-02 17:41:00.996 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.109 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.110 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.129 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing inventories for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.151 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating ProviderTree inventory for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.152 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Updating inventory in ProviderTree for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.165 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing aggregate associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.188 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Refreshing trait associations for resource provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.224 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.241 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.244 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:41:01 compute-0 nova_compute[189459]: 2025-12-02 17:41:01.244 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: ERROR   17:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: ERROR   17:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: ERROR   17:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: ERROR   17:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: ERROR   17:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:41:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:41:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:41:01.919 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:41:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:41:01.920 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:41:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:41:01.920 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.064 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.065 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8ad760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:41:03.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:41:03 compute-0 nova_compute[189459]: 2025-12-02 17:41:03.242 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:03 compute-0 nova_compute[189459]: 2025-12-02 17:41:03.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:04 compute-0 nova_compute[189459]: 2025-12-02 17:41:04.435 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:05 compute-0 nova_compute[189459]: 2025-12-02 17:41:05.493 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:08 compute-0 nova_compute[189459]: 2025-12-02 17:41:08.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:08 compute-0 nova_compute[189459]: 2025-12-02 17:41:08.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:41:09 compute-0 nova_compute[189459]: 2025-12-02 17:41:09.406 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:09 compute-0 nova_compute[189459]: 2025-12-02 17:41:09.436 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:10 compute-0 podman[266677]: 2025-12-02 17:41:10.300287628 +0000 UTC m=+0.118636293 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 17:41:10 compute-0 nova_compute[189459]: 2025-12-02 17:41:10.498 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:12 compute-0 nova_compute[189459]: 2025-12-02 17:41:12.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:14 compute-0 nova_compute[189459]: 2025-12-02 17:41:14.440 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:15 compute-0 nova_compute[189459]: 2025-12-02 17:41:15.427 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:15 compute-0 nova_compute[189459]: 2025-12-02 17:41:15.502 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:16 compute-0 podman[266698]: 2025-12-02 17:41:16.302220422 +0000 UTC m=+0.115555262 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 17:41:16 compute-0 podman[266697]: 2025-12-02 17:41:16.322064827 +0000 UTC m=+0.137943064 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 17:41:19 compute-0 podman[266741]: 2025-12-02 17:41:19.315521916 +0000 UTC m=+0.114387181 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:41:19 compute-0 podman[266740]: 2025-12-02 17:41:19.334719584 +0000 UTC m=+0.138590952 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec  2 17:41:19 compute-0 podman[266739]: 2025-12-02 17:41:19.352994568 +0000 UTC m=+0.168119314 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 17:41:19 compute-0 nova_compute[189459]: 2025-12-02 17:41:19.443 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:20 compute-0 nova_compute[189459]: 2025-12-02 17:41:20.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:20 compute-0 nova_compute[189459]: 2025-12-02 17:41:20.409 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 17:41:20 compute-0 nova_compute[189459]: 2025-12-02 17:41:20.428 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 17:41:20 compute-0 nova_compute[189459]: 2025-12-02 17:41:20.506 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:24 compute-0 nova_compute[189459]: 2025-12-02 17:41:24.445 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:25 compute-0 nova_compute[189459]: 2025-12-02 17:41:25.509 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:29 compute-0 nova_compute[189459]: 2025-12-02 17:41:29.448 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:29 compute-0 podman[203941]: time="2025-12-02T17:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:41:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:41:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4321 "" "Go-http-client/1.1"
Dec  2 17:41:30 compute-0 podman[266796]: 2025-12-02 17:41:30.300902816 +0000 UTC m=+0.110050786 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 17:41:30 compute-0 podman[266797]: 2025-12-02 17:41:30.313008957 +0000 UTC m=+0.115688865 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 17:41:30 compute-0 podman[266795]: 2025-12-02 17:41:30.371181448 +0000 UTC m=+0.189287055 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  2 17:41:30 compute-0 nova_compute[189459]: 2025-12-02 17:41:30.513 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: ERROR   17:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: ERROR   17:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: ERROR   17:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: ERROR   17:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: ERROR   17:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:41:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:41:32 compute-0 nova_compute[189459]: 2025-12-02 17:41:32.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:32 compute-0 nova_compute[189459]: 2025-12-02 17:41:32.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 17:41:34 compute-0 nova_compute[189459]: 2025-12-02 17:41:34.449 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:35 compute-0 nova_compute[189459]: 2025-12-02 17:41:35.517 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:39 compute-0 nova_compute[189459]: 2025-12-02 17:41:39.451 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:40 compute-0 nova_compute[189459]: 2025-12-02 17:41:40.522 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:41 compute-0 podman[266864]: 2025-12-02 17:41:41.321232744 +0000 UTC m=+0.142429443 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  2 17:41:42 compute-0 nova_compute[189459]: 2025-12-02 17:41:42.425 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:44 compute-0 nova_compute[189459]: 2025-12-02 17:41:44.453 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:45 compute-0 nova_compute[189459]: 2025-12-02 17:41:45.526 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:47 compute-0 podman[266884]: 2025-12-02 17:41:47.293206194 +0000 UTC m=+0.108150775 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec  2 17:41:47 compute-0 podman[266885]: 2025-12-02 17:41:47.31831712 +0000 UTC m=+0.123673227 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 17:41:49 compute-0 nova_compute[189459]: 2025-12-02 17:41:49.458 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:50 compute-0 podman[266926]: 2025-12-02 17:41:50.312731623 +0000 UTC m=+0.109973353 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 17:41:50 compute-0 podman[266924]: 2025-12-02 17:41:50.339148213 +0000 UTC m=+0.154158344 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 17:41:50 compute-0 podman[266925]: 2025-12-02 17:41:50.347077753 +0000 UTC m=+0.162749711 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, release=1214.1726694543, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Dec  2 17:41:50 compute-0 nova_compute[189459]: 2025-12-02 17:41:50.528 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:52 compute-0 nova_compute[189459]: 2025-12-02 17:41:52.729 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:54 compute-0 nova_compute[189459]: 2025-12-02 17:41:54.460 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:55 compute-0 nova_compute[189459]: 2025-12-02 17:41:55.532 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:57 compute-0 nova_compute[189459]: 2025-12-02 17:41:57.436 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:57 compute-0 nova_compute[189459]: 2025-12-02 17:41:57.437 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:41:57 compute-0 nova_compute[189459]: 2025-12-02 17:41:57.437 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:41:57 compute-0 nova_compute[189459]: 2025-12-02 17:41:57.455 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:41:57 compute-0 nova_compute[189459]: 2025-12-02 17:41:57.457 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:41:59 compute-0 nova_compute[189459]: 2025-12-02 17:41:59.464 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:41:59 compute-0 podman[203941]: time="2025-12-02T17:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:41:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:41:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4330 "" "Go-http-client/1.1"
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.536 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.641 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.642 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.642 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:42:00 compute-0 nova_compute[189459]: 2025-12-02 17:42:00.643 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.106 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.107 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5341MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.107 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.108 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.169 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.170 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.199 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.215 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.216 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:42:01 compute-0 nova_compute[189459]: 2025-12-02 17:42:01.216 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:42:01 compute-0 podman[266978]: 2025-12-02 17:42:01.257210372 +0000 UTC m=+0.085746912 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:42:01 compute-0 podman[266979]: 2025-12-02 17:42:01.302622755 +0000 UTC m=+0.118493730 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:42:01 compute-0 podman[266977]: 2025-12-02 17:42:01.325486711 +0000 UTC m=+0.149182933 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: ERROR   17:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: ERROR   17:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: ERROR   17:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: ERROR   17:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: ERROR   17:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:42:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:42:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:42:01.920 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:42:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:42:01.921 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:42:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:42:01.921 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:42:04 compute-0 nova_compute[189459]: 2025-12-02 17:42:04.465 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:05 compute-0 nova_compute[189459]: 2025-12-02 17:42:05.216 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:05 compute-0 nova_compute[189459]: 2025-12-02 17:42:05.539 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:09 compute-0 nova_compute[189459]: 2025-12-02 17:42:09.408 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:09 compute-0 nova_compute[189459]: 2025-12-02 17:42:09.468 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:10 compute-0 nova_compute[189459]: 2025-12-02 17:42:10.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:10 compute-0 nova_compute[189459]: 2025-12-02 17:42:10.410 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:42:10 compute-0 nova_compute[189459]: 2025-12-02 17:42:10.543 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:12 compute-0 podman[267048]: 2025-12-02 17:42:12.327204644 +0000 UTC m=+0.141889849 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec  2 17:42:14 compute-0 nova_compute[189459]: 2025-12-02 17:42:14.474 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:15 compute-0 nova_compute[189459]: 2025-12-02 17:42:15.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:15 compute-0 nova_compute[189459]: 2025-12-02 17:42:15.547 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:18 compute-0 podman[267068]: 2025-12-02 17:42:18.297125063 +0000 UTC m=+0.107848097 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec  2 17:42:18 compute-0 podman[267067]: 2025-12-02 17:42:18.303304066 +0000 UTC m=+0.115671504 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:42:19 compute-0 nova_compute[189459]: 2025-12-02 17:42:19.482 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:20 compute-0 nova_compute[189459]: 2025-12-02 17:42:20.553 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:21 compute-0 podman[267110]: 2025-12-02 17:42:21.296618332 +0000 UTC m=+0.095167242 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 17:42:21 compute-0 podman[267109]: 2025-12-02 17:42:21.300900615 +0000 UTC m=+0.096738833 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 17:42:21 compute-0 podman[267108]: 2025-12-02 17:42:21.301438929 +0000 UTC m=+0.103405859 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 17:42:24 compute-0 nova_compute[189459]: 2025-12-02 17:42:24.486 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:25 compute-0 nova_compute[189459]: 2025-12-02 17:42:25.556 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:29 compute-0 nova_compute[189459]: 2025-12-02 17:42:29.491 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:29 compute-0 podman[203941]: time="2025-12-02T17:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:42:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:42:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4331 "" "Go-http-client/1.1"
Dec  2 17:42:30 compute-0 nova_compute[189459]: 2025-12-02 17:42:30.559 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: ERROR   17:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: ERROR   17:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: ERROR   17:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: ERROR   17:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: ERROR   17:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:42:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:42:32 compute-0 podman[267162]: 2025-12-02 17:42:32.290010866 +0000 UTC m=+0.101417317 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 17:42:32 compute-0 podman[267163]: 2025-12-02 17:42:32.301253004 +0000 UTC m=+0.093551829 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 17:42:32 compute-0 podman[267161]: 2025-12-02 17:42:32.361288704 +0000 UTC m=+0.168513465 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:42:34 compute-0 nova_compute[189459]: 2025-12-02 17:42:34.495 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:35 compute-0 nova_compute[189459]: 2025-12-02 17:42:35.564 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:39 compute-0 nova_compute[189459]: 2025-12-02 17:42:39.499 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:40 compute-0 nova_compute[189459]: 2025-12-02 17:42:40.567 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:43 compute-0 podman[267232]: 2025-12-02 17:42:43.296229169 +0000 UTC m=+0.123508433 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350)
Dec  2 17:42:43 compute-0 nova_compute[189459]: 2025-12-02 17:42:43.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:44 compute-0 nova_compute[189459]: 2025-12-02 17:42:44.501 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:44 compute-0 podman[203941]: time="2025-12-02T17:42:44Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:42:44 compute-0 podman[203941]: @ - - [02/Dec/2025:17:42:44 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 28679 "" "Go-http-client/1.1"
Dec  2 17:42:45 compute-0 nova_compute[189459]: 2025-12-02 17:42:45.571 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:49 compute-0 podman[267253]: 2025-12-02 17:42:49.271468785 +0000 UTC m=+0.094342960 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  2 17:42:49 compute-0 podman[267254]: 2025-12-02 17:42:49.301529121 +0000 UTC m=+0.117382980 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec  2 17:42:49 compute-0 nova_compute[189459]: 2025-12-02 17:42:49.507 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:50 compute-0 nova_compute[189459]: 2025-12-02 17:42:50.575 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:52 compute-0 podman[267293]: 2025-12-02 17:42:52.285850498 +0000 UTC m=+0.085821184 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  2 17:42:52 compute-0 podman[267292]: 2025-12-02 17:42:52.304157513 +0000 UTC m=+0.105885876 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Dec  2 17:42:52 compute-0 podman[267291]: 2025-12-02 17:42:52.306528725 +0000 UTC m=+0.116514387 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 17:42:54 compute-0 nova_compute[189459]: 2025-12-02 17:42:54.508 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:55 compute-0 nova_compute[189459]: 2025-12-02 17:42:55.578 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:57 compute-0 nova_compute[189459]: 2025-12-02 17:42:57.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:59 compute-0 nova_compute[189459]: 2025-12-02 17:42:59.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:42:59 compute-0 nova_compute[189459]: 2025-12-02 17:42:59.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 17:42:59 compute-0 nova_compute[189459]: 2025-12-02 17:42:59.412 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 17:42:59 compute-0 nova_compute[189459]: 2025-12-02 17:42:59.435 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 17:42:59 compute-0 nova_compute[189459]: 2025-12-02 17:42:59.512 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:42:59 compute-0 podman[203941]: time="2025-12-02T17:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:42:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:42:59 compute-0 podman[203941]: @ - - [02/Dec/2025:17:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4323 "" "Go-http-client/1.1"
Dec  2 17:43:00 compute-0 nova_compute[189459]: 2025-12-02 17:43:00.582 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: ERROR   17:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: ERROR   17:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: ERROR   17:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: ERROR   17:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: ERROR   17:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:43:01 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:43:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:43:01.921 106835 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:43:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:43:01.922 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:43:01 compute-0 ovn_metadata_agent[106830]: 2025-12-02 17:43:01.922 106835 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.453 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.454 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.454 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.455 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.918 189463 WARNING nova.virt.libvirt.driver [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.920 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5330MB free_disk=72.12247467041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.920 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 17:43:02 compute-0 nova_compute[189459]: 2025-12-02 17:43:02.921 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.066 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.066 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f007fda0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda0110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ff0a9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd231d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 rsyslogd[236995]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007ffb22a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f007fda00e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f0081d16840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f007fd21760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f007fd230e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd232f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f007fd23200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd233b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fda03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f007fd23260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f007ff0a330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f007fd232c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f007fd23320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f007fd23380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f007fda0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f007fd233e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f007fd23770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f007fd23a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f007fd23440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f007fd23c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd234d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd236e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f007fd23fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f007e8704a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f007fd234a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f007fd23ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f007fd23d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f007fd23dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f007fd23e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f007fd236b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f007fd23ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f007fd23710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f007fd23f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f007feb7b00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 ceilometer_agent_compute[200189]: 2025-12-02 17:43:03.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 17:43:03 compute-0 nova_compute[189459]: 2025-12-02 17:43:03.123 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 17:43:03 compute-0 nova_compute[189459]: 2025-12-02 17:43:03.124 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 17:43:03 compute-0 nova_compute[189459]: 2025-12-02 17:43:03.226 189463 DEBUG nova.compute.provider_tree [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed in ProviderTree for provider: 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 17:43:03 compute-0 podman[267351]: 2025-12-02 17:43:03.288845157 +0000 UTC m=+0.089319767 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:43:03 compute-0 podman[267350]: 2025-12-02 17:43:03.288856528 +0000 UTC m=+0.099505277 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 17:43:03 compute-0 podman[267349]: 2025-12-02 17:43:03.326665209 +0000 UTC m=+0.144586151 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 17:43:03 compute-0 nova_compute[189459]: 2025-12-02 17:43:03.589 189463 DEBUG nova.scheduler.client.report [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Inventory has not changed for provider 9fd1b4c0-b7de-4b88-8041-4e819fca48c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 17:43:03 compute-0 nova_compute[189459]: 2025-12-02 17:43:03.591 189463 DEBUG nova.compute.resource_tracker [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 17:43:03 compute-0 nova_compute[189459]: 2025-12-02 17:43:03.591 189463 DEBUG oslo_concurrency.lockutils [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 17:43:04 compute-0 nova_compute[189459]: 2025-12-02 17:43:04.516 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:04 compute-0 nova_compute[189459]: 2025-12-02 17:43:04.587 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:05 compute-0 nova_compute[189459]: 2025-12-02 17:43:05.587 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:06 compute-0 nova_compute[189459]: 2025-12-02 17:43:06.410 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:09 compute-0 nova_compute[189459]: 2025-12-02 17:43:09.519 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:10 compute-0 nova_compute[189459]: 2025-12-02 17:43:10.589 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:11 compute-0 nova_compute[189459]: 2025-12-02 17:43:11.405 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:12 compute-0 nova_compute[189459]: 2025-12-02 17:43:12.409 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:12 compute-0 nova_compute[189459]: 2025-12-02 17:43:12.411 189463 DEBUG nova.compute.manager [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 17:43:14 compute-0 podman[267413]: 2025-12-02 17:43:14.343955278 +0000 UTC m=+0.164333794 container health_status dcbfe8a4e0ff1038f5ba14bd39d573212a151b2d7c11866312e00788cad970de (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec  2 17:43:14 compute-0 nova_compute[189459]: 2025-12-02 17:43:14.523 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:15 compute-0 nova_compute[189459]: 2025-12-02 17:43:15.592 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:17 compute-0 nova_compute[189459]: 2025-12-02 17:43:17.411 189463 DEBUG oslo_service.periodic_task [None req-d32cbaf0-edc0-4bb7-9a54-850a55c3e70c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 17:43:19 compute-0 nova_compute[189459]: 2025-12-02 17:43:19.526 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:20 compute-0 podman[267434]: 2025-12-02 17:43:20.295940789 +0000 UTC m=+0.104924090 container health_status 842d35422845bd8ca41afd8c6b89356002eb66dfc6ab7a368fa3ae0b0e93036c (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 17:43:20 compute-0 podman[267435]: 2025-12-02 17:43:20.304952317 +0000 UTC m=+0.112288895 container health_status 92c08b6e4763a52fc2f3255fa982ae1864e18633b23c43e865f7dcd2cc4c6a24 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 17:43:20 compute-0 nova_compute[189459]: 2025-12-02 17:43:20.599 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:23 compute-0 podman[267474]: 2025-12-02 17:43:23.249608304 +0000 UTC m=+0.074955687 container health_status 201e3c8660ac2d779aacd432766cc0ef4e0146ad29eaefd09e2d7a6349513050 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 17:43:23 compute-0 podman[267480]: 2025-12-02 17:43:23.268099164 +0000 UTC m=+0.074890715 container health_status d60ef4d6f27a263693c7473fc3ad301b83547a2e770da7fd6947b04494caa942 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  2 17:43:23 compute-0 podman[267475]: 2025-12-02 17:43:23.276742772 +0000 UTC m=+0.092096080 container health_status 67ff5d4c323f417a0572cfd2458c5b79eea6721c89779af2c77381d53a0d4854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Dec  2 17:43:24 compute-0 nova_compute[189459]: 2025-12-02 17:43:24.528 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:25 compute-0 nova_compute[189459]: 2025-12-02 17:43:25.603 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:27 compute-0 systemd-logind[790]: New session 34 of user zuul.
Dec  2 17:43:27 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec  2 17:43:29 compute-0 nova_compute[189459]: 2025-12-02 17:43:29.530 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:29 compute-0 podman[203941]: time="2025-12-02T17:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 17:43:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  2 17:43:29 compute-0 podman[203941]: @ - - [02/Dec/2025:17:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4317 "" "Go-http-client/1.1"
Dec  2 17:43:30 compute-0 nova_compute[189459]: 2025-12-02 17:43:30.606 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: ERROR   17:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: ERROR   17:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: ERROR   17:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: ERROR   17:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: ERROR   17:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 17:43:31 compute-0 openstack_network_exporter[206093]: 
Dec  2 17:43:32 compute-0 ovs-vsctl[267696]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  2 17:43:33 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  2 17:43:34 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  2 17:43:34 compute-0 virtqemud[189206]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  2 17:43:34 compute-0 podman[267839]: 2025-12-02 17:43:34.28533761 +0000 UTC m=+0.101008847 container health_status 8de432e45acf50efcdc6962d7e64ef0661effd75e19bcfcf00e392d0777969d3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 17:43:34 compute-0 podman[267841]: 2025-12-02 17:43:34.295986962 +0000 UTC m=+0.087793147 container health_status c55c1b518081584d6ed72ee7a95a4a122df4fdc0843f1442cdb3f0095736dd23 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 17:43:34 compute-0 podman[267837]: 2025-12-02 17:43:34.319478804 +0000 UTC m=+0.133938789 container health_status 38330d679c842cde7afa6ec1655b4ac64e1420af4cd09bd101779d066ff793eb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  2 17:43:34 compute-0 nova_compute[189459]: 2025-12-02 17:43:34.533 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:43:35 compute-0 nova_compute[189459]: 2025-12-02 17:43:35.609 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 17:43:37 compute-0 systemd[1]: Starting Hostname Service...
Dec  2 17:43:38 compute-0 systemd[1]: Started Hostname Service.
Dec  2 17:43:39 compute-0 nova_compute[189459]: 2025-12-02 17:43:39.535 189463 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
